00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 985
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3647
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.081 The recommended git tool is: git
00:00:00.081 using credential 00000000-0000-0000-0000-000000000002
00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.131 Fetching changes from the remote Git repository
00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.184 Using shallow fetch with depth 1
00:00:00.184 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.184 > git --version # timeout=10
00:00:00.235 > git --version # 'git version 2.39.2'
00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.267 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.267 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.621 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.632 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.644 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.644 > git config core.sparsecheckout # timeout=10
00:00:04.654 > git read-tree -mu HEAD # timeout=10
00:00:04.668 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.690 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.690 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.790 [Pipeline] Start of Pipeline
00:00:04.805 [Pipeline] library
00:00:04.807 Loading library shm_lib@master
00:00:04.807 Library shm_lib@master is cached. Copying from home.
00:00:04.825 [Pipeline] node
00:00:04.838 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:04.840 [Pipeline] {
00:00:04.852 [Pipeline] catchError
00:00:04.853 [Pipeline] {
00:00:04.868 [Pipeline] wrap
00:00:04.878 [Pipeline] {
00:00:04.884 [Pipeline] stage
00:00:04.885 [Pipeline] { (Prologue)
00:00:04.898 [Pipeline] echo
00:00:04.899 Node: VM-host-SM38
00:00:04.903 [Pipeline] cleanWs
00:00:04.912 [WS-CLEANUP] Deleting project workspace...
00:00:04.912 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.920 [WS-CLEANUP] done
00:00:05.108 [Pipeline] setCustomBuildProperty
00:00:05.166 [Pipeline] httpRequest
00:00:05.836 [Pipeline] echo
00:00:05.838 Sorcerer 10.211.164.20 is alive
00:00:05.846 [Pipeline] retry
00:00:05.848 [Pipeline] {
00:00:05.861 [Pipeline] httpRequest
00:00:05.866 HttpMethod: GET
00:00:05.867 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.868 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.874 Response Code: HTTP/1.1 200 OK
00:00:05.875 Success: Status code 200 is in the accepted range: 200,404
00:00:05.875 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.482 [Pipeline] }
00:00:06.494 [Pipeline] // retry
00:00:06.500 [Pipeline] sh
00:00:06.781 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.795 [Pipeline] httpRequest
00:00:07.126 [Pipeline] echo
00:00:07.128 Sorcerer 10.211.164.20 is alive
00:00:07.136 [Pipeline] retry
00:00:07.138 [Pipeline] {
00:00:07.151 [Pipeline] httpRequest
00:00:07.155 HttpMethod: GET
00:00:07.156 URL: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:00:07.156 Sending request to url: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:00:07.164 Response Code: HTTP/1.1 200 OK
00:00:07.165 Success: Status code 200 is in the accepted range: 200,404
00:00:07.165 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:00:49.769 [Pipeline] }
00:00:49.787 [Pipeline] // retry
00:00:49.794 [Pipeline] sh
00:00:50.082 + tar --no-same-owner -xf spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:00:52.635 [Pipeline] sh
00:00:52.922 + git -C spdk log --oneline -n5
00:00:52.922 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:00:52.922 8d982eda9 dpdk: add adjustments for recent rte_power changes
00:00:52.922 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:00:52.922 73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:00:52.922 029355612 bdev_ut: add manual examine bdev unit test case
00:00:52.943 [Pipeline] withCredentials
00:00:52.956 > git --version # timeout=10
00:00:52.970 > git --version # 'git version 2.39.2'
00:00:52.989 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:52.992 [Pipeline] {
00:00:53.002 [Pipeline] retry
00:00:53.004 [Pipeline] {
00:00:53.019 [Pipeline] sh
00:00:53.306 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:53.320 [Pipeline] }
00:00:53.342 [Pipeline] // retry
00:00:53.352 [Pipeline] }
00:00:53.395 [Pipeline] // withCredentials
00:00:53.403 [Pipeline] httpRequest
00:00:53.782 [Pipeline] echo
00:00:53.784 Sorcerer 10.211.164.20 is alive
00:00:53.795 [Pipeline] retry
00:00:53.797 [Pipeline] {
00:00:53.812 [Pipeline] httpRequest
00:00:53.818 HttpMethod: GET
00:00:53.818 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:53.819 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:53.834 Response Code: HTTP/1.1 200 OK
00:00:53.834 Success: Status code 200 is in the accepted range: 200,404
00:00:53.835 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:17.074 [Pipeline] }
00:01:17.094 [Pipeline] // retry
00:01:17.103 [Pipeline] sh
00:01:17.390 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:18.792 [Pipeline] sh
00:01:19.070 + git -C dpdk log --oneline -n5
00:01:19.070 eeb0605f11 version: 23.11.0
00:01:19.070 238778122a doc: update release notes for 23.11
00:01:19.070 46aa6b3cfc doc: fix description of RSS features
00:01:19.070 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:19.070 7e421ae345 devtools: support skipping forbid rule check
00:01:19.087 [Pipeline] writeFile
00:01:19.100 [Pipeline] sh
00:01:19.386 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:19.401 [Pipeline] sh
00:01:19.689 + cat autorun-spdk.conf
00:01:19.689 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.689 SPDK_TEST_NVME=1
00:01:19.689 SPDK_TEST_FTL=1
00:01:19.689 SPDK_TEST_ISAL=1
00:01:19.689 SPDK_RUN_ASAN=1
00:01:19.689 SPDK_RUN_UBSAN=1
00:01:19.689 SPDK_TEST_XNVME=1
00:01:19.689 SPDK_TEST_NVME_FDP=1
00:01:19.689 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:19.689 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:19.689 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:19.698 RUN_NIGHTLY=1
00:01:19.700 [Pipeline] }
00:01:19.714 [Pipeline] // stage
00:01:19.730 [Pipeline] stage
00:01:19.733 [Pipeline] { (Run VM)
00:01:19.745 [Pipeline] sh
00:01:20.031 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:20.032 + echo 'Start stage prepare_nvme.sh'
00:01:20.032 Start stage prepare_nvme.sh
00:01:20.032 + [[ -n 3 ]]
00:01:20.032 + disk_prefix=ex3
00:01:20.032 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:20.032 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:20.032 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:20.032 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.032 ++ SPDK_TEST_NVME=1
00:01:20.032 ++ SPDK_TEST_FTL=1
00:01:20.032 ++ SPDK_TEST_ISAL=1
00:01:20.032 ++ SPDK_RUN_ASAN=1
00:01:20.032 ++ SPDK_RUN_UBSAN=1
00:01:20.032 ++ SPDK_TEST_XNVME=1
00:01:20.032 ++ SPDK_TEST_NVME_FDP=1
00:01:20.032 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:20.032 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:20.032 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.032 ++ RUN_NIGHTLY=1
00:01:20.032 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:20.032 + nvme_files=()
00:01:20.032 + declare -A nvme_files
00:01:20.032 + backend_dir=/var/lib/libvirt/images/backends
00:01:20.032 + nvme_files['nvme.img']=5G
00:01:20.032 + nvme_files['nvme-cmb.img']=5G
00:01:20.032 + nvme_files['nvme-multi0.img']=4G
00:01:20.032 + nvme_files['nvme-multi1.img']=4G
00:01:20.032 + nvme_files['nvme-multi2.img']=4G
00:01:20.032 + nvme_files['nvme-openstack.img']=8G
00:01:20.032 + nvme_files['nvme-zns.img']=5G
00:01:20.032 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:20.032 + (( SPDK_TEST_FTL == 1 ))
00:01:20.032 + nvme_files["nvme-ftl.img"]=6G
00:01:20.032 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:20.032 + nvme_files["nvme-fdp.img"]=1G
00:01:20.032 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:20.032 + for nvme in "${!nvme_files[@]}"
00:01:20.032 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:20.032 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:20.032 + for nvme in "${!nvme_files[@]}"
00:01:20.032 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:01:20.977 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:20.977 + for nvme in "${!nvme_files[@]}"
00:01:20.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:20.977 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.977 + for nvme in "${!nvme_files[@]}"
00:01:20.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:20.977 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:20.977 + for nvme in "${!nvme_files[@]}"
00:01:20.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:21.550 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:21.550 + for nvme in "${!nvme_files[@]}"
00:01:21.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:21.550 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.550 + for nvme in "${!nvme_files[@]}"
00:01:21.550 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:21.550 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.812 + for nvme in "${!nvme_files[@]}"
00:01:21.812 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:01:21.812 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:21.812 + for nvme in "${!nvme_files[@]}"
00:01:21.812 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:22.763 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
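Note: the "Formatting '...', fmt=raw size=... preallocation=falloc" lines in the loop above are qemu-img output, so each create_nvme_img.sh call boils down to roughly the sketch below. This is illustrative only; the actual helper lives in spdk/scripts/vagrant and also handles ownership and cleanup, which this omits. Path and size are taken from the first loop iteration in the log.

  # Rough equivalent of one create_nvme_img.sh invocation above (assumption:
  # the helper wraps qemu-img, which prints the "Formatting ..." line).
  backend_dir=/var/lib/libvirt/images/backends
  sudo qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex3-nvme-multi2.img" 4G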
00:01:22.763 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:22.763 + echo 'End stage prepare_nvme.sh'
00:01:22.763 End stage prepare_nvme.sh
00:01:22.777 [Pipeline] sh
00:01:23.063 + DISTRO=fedora39
00:01:23.063 + CPUS=10
00:01:23.063 + RAM=12288
00:01:23.063 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:23.063 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:23.063
00:01:23.063 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:23.063 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:23.063 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:23.063 HELP=0
00:01:23.063 DRY_RUN=0
00:01:23.063 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:01:23.063 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:23.063 NVME_AUTO_CREATE=0
00:01:23.063 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:01:23.063 NVME_CMB=,,,,
00:01:23.063 NVME_PMR=,,,,
00:01:23.063 NVME_ZNS=,,,,
00:01:23.063 NVME_MS=true,,,,
00:01:23.064 NVME_FDP=,,,on,
00:01:23.064 SPDK_VAGRANT_DISTRO=fedora39
00:01:23.064 SPDK_VAGRANT_VMCPU=10
00:01:23.064 SPDK_VAGRANT_VMRAM=12288
00:01:23.064 SPDK_VAGRANT_PROVIDER=libvirt
00:01:23.064 SPDK_VAGRANT_HTTP_PROXY=
00:01:23.064 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:23.064 SPDK_OPENSTACK_NETWORK=0
00:01:23.064 VAGRANT_PACKAGE_BOX=0
00:01:23.064 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:23.064 FORCE_DISTRO=true
00:01:23.064 VAGRANT_BOX_VERSION=
00:01:23.064 EXTRA_VAGRANTFILES=
00:01:23.064 NIC_MODEL=e1000
00:01:23.064
00:01:23.064 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:23.064 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:25.615 Bringing machine 'default' up with 'libvirt' provider...
00:01:25.877 ==> default: Creating image (snapshot of base box volume).
00:01:26.139 ==> default: Creating domain with the following settings...
00:01:26.139 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732058051_ed094b6dd25e28950ddf
00:01:26.139 ==> default: -- Domain type: kvm
00:01:26.139 ==> default: -- Cpus: 10
00:01:26.139 ==> default: -- Feature: acpi
00:01:26.139 ==> default: -- Feature: apic
00:01:26.139 ==> default: -- Feature: pae
00:01:26.139 ==> default: -- Memory: 12288M
00:01:26.139 ==> default: -- Memory Backing: hugepages:
00:01:26.139 ==> default: -- Management MAC:
00:01:26.139 ==> default: -- Loader:
00:01:26.139 ==> default: -- Nvram:
00:01:26.139 ==> default: -- Base box: spdk/fedora39
00:01:26.139 ==> default: -- Storage pool: default
00:01:26.140 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732058051_ed094b6dd25e28950ddf.img (20G)
00:01:26.140 ==> default: -- Volume Cache: default
00:01:26.140 ==> default: -- Kernel:
00:01:26.140 ==> default: -- Initrd:
00:01:26.140 ==> default: -- Graphics Type: vnc
00:01:26.140 ==> default: -- Graphics Port: -1
00:01:26.140 ==> default: -- Graphics IP: 127.0.0.1
00:01:26.140 ==> default: -- Graphics Password: Not defined
00:01:26.140 ==> default: -- Video Type: cirrus
00:01:26.140 ==> default: -- Video VRAM: 9216
00:01:26.140 ==> default: -- Sound Type:
00:01:26.140 ==> default: -- Keymap: en-us
00:01:26.140 ==> default: -- TPM Path:
00:01:26.140 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:26.140 ==> default: -- Command line args:
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:26.140 ==> default: -> value=-drive,
00:01:26.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:26.140 ==> default: -> value=-drive,
00:01:26.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:26.140 ==> default: -> value=-drive,
00:01:26.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.140 ==> default: -> value=-drive,
00:01:26.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.140 ==> default: -> value=-drive,
00:01:26.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:26.140 ==> default: -> value=-drive,
00:01:26.140 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:26.140 ==> default: -> value=-device,
00:01:26.140 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:26.140 ==> default: Creating shared folders metadata...
00:01:26.402 ==> default: Starting domain.
00:01:28.318 ==> default: Waiting for domain to get an IP address...
00:01:43.322 ==> default: Waiting for SSH to become available...
00:01:43.322 ==> default: Configuring and enabling network interfaces...
00:01:47.527 default: SSH address: 192.168.121.105:22
00:01:47.527 default: SSH username: vagrant
00:01:47.527 default: SSH auth method: private key
00:01:48.914 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:57.063 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:02.365 ==> default: Mounting SSHFS shared folder...
00:02:04.277 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:04.277 ==> default: Checking Mount..
00:02:05.663 ==> default: Folder Successfully Mounted!
00:02:05.663
00:02:05.663 SUCCESS!
00:02:05.663
00:02:05.663 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:05.663 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:05.663 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:05.663
00:02:05.674 [Pipeline] }
00:02:05.689 [Pipeline] // stage
00:02:05.698 [Pipeline] dir
00:02:05.698 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:05.700 [Pipeline] {
00:02:05.714 [Pipeline] catchError
00:02:05.716 [Pipeline] {
00:02:05.729 [Pipeline] sh
00:02:06.013 + vagrant ssh-config --host vagrant
00:02:06.013 + sed -ne '/^Host/,$p'
00:02:06.013 + tee ssh_conf
00:02:08.569 Host vagrant
00:02:08.569 HostName 192.168.121.105
00:02:08.569 User vagrant
00:02:08.569 Port 22
00:02:08.569 UserKnownHostsFile /dev/null
00:02:08.569 StrictHostKeyChecking no
00:02:08.569 PasswordAuthentication no
00:02:08.569 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:08.569 IdentitiesOnly yes
00:02:08.569 LogLevel FATAL
00:02:08.569 ForwardAgent yes
00:02:08.569 ForwardX11 yes
00:02:08.569
00:02:08.585 [Pipeline] withEnv
00:02:08.587 [Pipeline] {
00:02:08.600 [Pipeline] sh
00:02:08.884 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:08.884 source /etc/os-release
00:02:08.884 [[ -e /image.version ]] && img=$(< /image.version)
00:02:08.884 # Minimal, systemd-like check.
00:02:08.884 if [[ -e /.dockerenv ]]; then
00:02:08.884 # Clear garbage from the node'\''s name:
00:02:08.884 # agt-er_autotest_547-896 -> autotest_547-896
00:02:08.884 # $HOSTNAME is the actual container id
00:02:08.884 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:08.884 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:08.884 # We can assume this is a mount from a host where container is running,
00:02:08.884 # so fetch its hostname to easily identify the target swarm worker.
00:02:08.884 container="$(< /etc/hostname) ($agent)"
00:02:08.884 else
00:02:08.884 # Fallback
00:02:08.884 container=$agent
00:02:08.884 fi
00:02:08.884 fi
00:02:08.884 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:08.884 '
00:02:09.155 [Pipeline] }
00:02:09.169 [Pipeline] // withEnv
00:02:09.175 [Pipeline] setCustomBuildProperty
00:02:09.188 [Pipeline] stage
00:02:09.191 [Pipeline] { (Tests)
00:02:09.207 [Pipeline] sh
00:02:09.491 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:09.791 [Pipeline] sh
00:02:10.072 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:10.351 [Pipeline] timeout
00:02:10.352 Timeout set to expire in 50 min
00:02:10.354 [Pipeline] {
00:02:10.368 [Pipeline] sh
00:02:10.733 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:11.306 HEAD is now at f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:02:11.319 [Pipeline] sh
00:02:11.600 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:11.874 [Pipeline] sh
00:02:12.159 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:12.466 [Pipeline] sh
00:02:12.746 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:13.008 ++ readlink -f spdk_repo
00:02:13.008 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:13.008 + [[ -n /home/vagrant/spdk_repo ]]
00:02:13.008 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:13.008 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:13.008 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:13.008 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:13.008 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:13.008 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:13.008 + cd /home/vagrant/spdk_repo
00:02:13.008 + source /etc/os-release
00:02:13.008 ++ NAME='Fedora Linux'
00:02:13.008 ++ VERSION='39 (Cloud Edition)'
00:02:13.008 ++ ID=fedora
00:02:13.008 ++ VERSION_ID=39
00:02:13.008 ++ VERSION_CODENAME=
00:02:13.008 ++ PLATFORM_ID=platform:f39
00:02:13.008 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:13.008 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:13.008 ++ LOGO=fedora-logo-icon
00:02:13.008 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:13.008 ++ HOME_URL=https://fedoraproject.org/
00:02:13.008 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:13.008 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:13.008 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:13.008 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:13.008 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:13.008 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:13.008 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:13.008 ++ SUPPORT_END=2024-11-12
00:02:13.008 ++ VARIANT='Cloud Edition'
00:02:13.008 ++ VARIANT_ID=cloud
00:02:13.008 + uname -a
00:02:13.008 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:13.008 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:13.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:13.531 Hugepages
00:02:13.531 node hugesize free / total
00:02:13.531 node0 1048576kB 0 / 0
00:02:13.531 node0 2048kB 0 / 0
00:02:13.531
00:02:13.531 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:13.531 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:13.531 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:13.531 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:13.531 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:13.793 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:13.793 + rm -f /tmp/spdk-ld-path
00:02:13.793 + source autorun-spdk.conf
00:02:13.793 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.793 ++ SPDK_TEST_NVME=1
00:02:13.793 ++ SPDK_TEST_FTL=1
00:02:13.793 ++ SPDK_TEST_ISAL=1
00:02:13.793 ++ SPDK_RUN_ASAN=1
00:02:13.793 ++ SPDK_RUN_UBSAN=1
00:02:13.793 ++ SPDK_TEST_XNVME=1
00:02:13.793 ++ SPDK_TEST_NVME_FDP=1
00:02:13.793 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:13.793 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:13.793 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:13.793 ++ RUN_NIGHTLY=1
00:02:13.793 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:13.793 + [[ -n '' ]]
00:02:13.793 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:13.793 + for M in /var/spdk/build-*-manifest.txt
00:02:13.793 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:13.793 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.793 + for M in /var/spdk/build-*-manifest.txt
00:02:13.793 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:13.793 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.793 + for M in /var/spdk/build-*-manifest.txt
00:02:13.793 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:13.793 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:13.793 ++ uname
00:02:13.793 + [[ Linux == \L\i\n\u\x ]]
00:02:13.793 + sudo dmesg -T
00:02:13.793 + sudo dmesg --clear
00:02:13.793 + dmesg_pid=5761
00:02:13.793 + sudo dmesg -Tw
00:02:13.793 + [[ Fedora Linux == FreeBSD ]]
00:02:13.793 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:13.793 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:13.793 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:13.793 + [[ -x /usr/src/fio-static/fio ]]
00:02:13.793 + export FIO_BIN=/usr/src/fio-static/fio
00:02:13.793 + FIO_BIN=/usr/src/fio-static/fio
00:02:13.793 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:13.793 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:13.793 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:13.793 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:13.793 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:13.793 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:13.793 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:13.793 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:13.793 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:13.793 23:14:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:13.793 23:14:59 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:13.793 23:14:59 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.793 23:14:59 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:13.793 23:14:59 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:13.793 23:14:59 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:13.793 23:14:59 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:13.793 23:14:59 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:13.794 23:14:59 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:13.794 23:14:59 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:13.794 23:14:59 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:13.794 23:14:59 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:13.794 23:14:59 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:13.794 23:14:59 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:13.794 23:14:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:13.794 23:14:59 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:14.055 23:15:00 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:14.055 23:15:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:14.055 23:15:00 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:14.055 23:15:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:14.055 23:15:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:14.055 23:15:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:14.056 23:15:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.056 23:15:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.056 23:15:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.056 23:15:00 -- paths/export.sh@5 -- $ export PATH
00:02:14.056 23:15:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.056 23:15:00 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:14.056 23:15:00 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:14.056 23:15:00 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732058100.XXXXXX
00:02:14.056 23:15:00 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732058100.XHxzBc
00:02:14.056 23:15:00 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:14.056 23:15:00 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']'
00:02:14.056 23:15:00 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:14.056 23:15:00 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:14.056 23:15:00 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:14.056 23:15:00 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:14.056 23:15:00 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:14.056 23:15:00 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:14.056 23:15:00 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.056 23:15:00 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:14.056 23:15:00 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:14.056 23:15:00 -- pm/common@17 -- $ local monitor
00:02:14.056 23:15:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:14.056 23:15:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:14.056 23:15:00 -- pm/common@25 -- $ sleep 1
00:02:14.056 23:15:00 -- pm/common@21 -- $ date +%s
00:02:14.056 23:15:00 -- pm/common@21 -- $ date +%s
00:02:14.056 23:15:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732058100
00:02:14.056 23:15:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732058100
00:02:14.056 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732058100_collect-cpu-load.pm.log
00:02:14.056 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732058100_collect-vmstat.pm.log
00:02:14.999 23:15:01 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:14.999 23:15:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:14.999 23:15:01 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:14.999 23:15:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:14.999 23:15:01 -- spdk/autobuild.sh@16 -- $ date -u
00:02:14.999 Tue Nov 19 11:15:01 PM UTC 2024
00:02:14.999 23:15:01 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:14.999 v25.01-pre-199-gf22e807f1
00:02:14.999 23:15:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:14.999 23:15:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:14.999 23:15:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:14.999 23:15:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:14.999 23:15:01 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.999 ************************************
00:02:14.999 START TEST asan
00:02:14.999 ************************************
00:02:14.999 using asan
00:02:14.999 23:15:01 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:14.999
00:02:14.999 real 0m0.000s
00:02:14.999 user 0m0.000s
00:02:14.999 sys 0m0.000s
00:02:14.999 23:15:01 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:14.999 ************************************
00:02:14.999 END TEST asan
00:02:14.999 ************************************
00:02:14.999 23:15:01 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:15.261 23:15:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:15.261 23:15:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:15.261 23:15:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:15.261 23:15:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:15.261 23:15:01 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.261 ************************************
00:02:15.261 START TEST ubsan
00:02:15.261 ************************************
00:02:15.261 23:15:01 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:15.261 using ubsan
00:02:15.261
00:02:15.261 real 0m0.000s
00:02:15.261 user 0m0.000s
00:02:15.261 sys 0m0.000s
00:02:15.261 23:15:01 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:15.261 23:15:01 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:15.261 ************************************
00:02:15.261 END TEST ubsan
00:02:15.261 ************************************
00:02:15.261 23:15:01 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:02:15.261 23:15:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:15.261 23:15:01 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:15.261 23:15:01 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:15.261 23:15:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:15.261 23:15:01 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.261 ************************************
00:02:15.261 START TEST build_native_dpdk
00:02:15.261 ************************************
00:02:15.261 23:15:01 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:15.261 eeb0605f11 version: 23.11.0
00:02:15.261 238778122a doc: update release notes for 23.11
00:02:15.261 46aa6b3cfc doc: fix description of RSS features
00:02:15.261 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:15.261 7e421ae345 devtools: support skipping forbid rule check
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:15.261 23:15:01 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:15.261 23:15:01 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:15.261 patching file config/rte_config.h
00:02:15.261 Hunk #1 succeeded at 60 (offset 1 line).
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:15.262 patching file lib/pcapng/rte_pcapng.c
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:15.262 23:15:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:15.262 23:15:01 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:20.556 The Meson build system
00:02:20.556 Version: 1.5.0
00:02:20.556 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:20.556 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:20.556 Build type: native build
00:02:20.556 Program cat found: YES (/usr/bin/cat)
00:02:20.556 Project name: DPDK
00:02:20.556 Project version: 23.11.0
00:02:20.556 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:20.556 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:20.556 Host machine cpu family: x86_64
00:02:20.557 Host machine cpu: x86_64
00:02:20.557 Message: ## Building in Developer Mode ##
00:02:20.557 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:20.557 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:20.557 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:20.557 Program python3 found: YES (/usr/bin/python3)
00:02:20.557 Program cat found: YES (/usr/bin/cat)
00:02:20.557 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:20.557 Compiler for C supports arguments -march=native: YES
00:02:20.557 Checking for size of "void *" : 8
00:02:20.557 Checking for size of "void *" : 8 (cached)
00:02:20.557 Library m found: YES
00:02:20.557 Library numa found: YES
00:02:20.557 Has header "numaif.h" : YES
00:02:20.557 Library fdt found: NO
00:02:20.557 Library execinfo found: NO
00:02:20.557 Has header "execinfo.h" : YES
00:02:20.557 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:20.557 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:20.557 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:20.557 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:20.557 Run-time dependency openssl found: YES 3.1.1
00:02:20.557 Run-time dependency libpcap found: YES 1.10.4
00:02:20.557 Has header "pcap.h" with dependency libpcap: YES
00:02:20.557 Compiler for C supports arguments -Wcast-qual: YES
00:02:20.557 Compiler for C supports arguments -Wdeprecated: YES
00:02:20.557 Compiler for C supports arguments -Wformat: YES
00:02:20.557 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:20.557 Compiler for C supports arguments -Wformat-security: NO
00:02:20.557 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:20.557 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:20.557 Compiler for C supports arguments -Wnested-externs: YES
00:02:20.557 Compiler for C supports arguments -Wold-style-definition: YES
00:02:20.557 Compiler for C supports arguments -Wpointer-arith: YES
00:02:20.557 Compiler for C supports arguments -Wsign-compare: YES
00:02:20.557 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:20.557 Compiler for C supports arguments -Wundef: YES
00:02:20.557 Compiler for C supports arguments -Wwrite-strings: YES
00:02:20.557 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:20.557 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:20.557 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:20.557 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:20.557 Program objdump found: YES (/usr/bin/objdump)
00:02:20.557 Compiler for C supports arguments -mavx512f: YES
00:02:20.557 Checking if "AVX512 checking" compiles: YES
00:02:20.557 Fetching value of define "__SSE4_2__" : 1
00:02:20.557 Fetching value of define "__AES__" : 1
00:02:20.557 Fetching value of define "__AVX__" : 1
00:02:20.557 Fetching value of define "__AVX2__" : 1
00:02:20.557 Fetching value of define "__AVX512BW__" : 1
00:02:20.557 Fetching value of define "__AVX512CD__" : 1
00:02:20.557 Fetching value of define "__AVX512DQ__" : 1
00:02:20.557 Fetching value of define "__AVX512F__" : 1
00:02:20.557 Fetching value of define "__AVX512VL__" : 1
00:02:20.557 Fetching value of define "__PCLMUL__" : 1
00:02:20.557 Fetching value of define "__RDRND__" : 1
00:02:20.557 Fetching value of define "__RDSEED__" : 1
00:02:20.557 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:20.557 Fetching value of define "__znver1__" : (undefined)
00:02:20.557 Fetching value of define "__znver2__" : (undefined)
00:02:20.557 Fetching value of define "__znver3__" : (undefined)
00:02:20.557 Fetching value of define "__znver4__" : (undefined)
00:02:20.557 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:20.557 Message: lib/log: Defining dependency "log"
00:02:20.557 Message: lib/kvargs: Defining dependency "kvargs"
00:02:20.557 Message: lib/telemetry: Defining dependency "telemetry"
00:02:20.557 Checking for function "getentropy" : NO
00:02:20.557 Message: lib/eal: Defining dependency "eal"
00:02:20.557 Message: lib/ring: Defining dependency "ring"
00:02:20.557 Message: lib/rcu: Defining dependency "rcu"
00:02:20.557 Message: lib/mempool: Defining dependency "mempool"
00:02:20.557 Message: lib/mbuf: Defining dependency "mbuf"
00:02:20.557 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:20.557 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:20.557 Compiler for C supports arguments -mpclmul: YES
00:02:20.557 Compiler for C supports arguments -maes: YES
00:02:20.557 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:20.557 Compiler for C supports arguments -mavx512bw: YES
00:02:20.557 Compiler for C supports arguments -mavx512dq: YES
00:02:20.557 Compiler for C supports arguments -mavx512vl: YES
00:02:20.557 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:20.557 Compiler for C supports arguments -mavx2: YES
00:02:20.557 Compiler for C supports arguments -mavx: YES
00:02:20.557 Message: lib/net: Defining dependency "net"
00:02:20.557 Message: lib/meter: Defining dependency "meter"
00:02:20.557 Message: lib/ethdev: Defining dependency "ethdev"
00:02:20.557 Message: lib/pci: Defining dependency "pci"
00:02:20.557 Message: lib/cmdline: Defining dependency "cmdline"
00:02:20.557 Message: lib/metrics: Defining dependency "metrics"
00:02:20.557 Message: lib/hash: Defining dependency "hash"
00:02:20.557 Message: lib/timer: Defining dependency "timer"
00:02:20.557 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:20.557 Message: lib/acl: Defining dependency "acl"
00:02:20.557 Message: lib/bbdev: Defining dependency "bbdev"
00:02:20.557 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:20.557 Run-time dependency libelf found: YES 0.191
00:02:20.557 Message: lib/bpf: Defining dependency "bpf"
00:02:20.557 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:20.557 Message: lib/compressdev: Defining dependency "compressdev"
00:02:20.557 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:20.557 Message: lib/distributor: Defining dependency "distributor"
00:02:20.557 Message: lib/dmadev: Defining dependency "dmadev"
00:02:20.557 Message: lib/efd: Defining dependency "efd"
00:02:20.557 Message: lib/eventdev: Defining dependency "eventdev"
00:02:20.557 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:20.557 Message: lib/gpudev: Defining dependency "gpudev"
00:02:20.557 Message: lib/gro: Defining dependency "gro"
00:02:20.557 Message: lib/gso: Defining dependency "gso"
00:02:20.557 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:20.557 Message: lib/jobstats: Defining dependency "jobstats"
00:02:20.557 Message: lib/latencystats: Defining dependency "latencystats"
00:02:20.557 Message: lib/lpm: Defining dependency "lpm"
00:02:20.557 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512IFMA__" : 1
00:02:20.557 Message: lib/member: Defining dependency "member"
00:02:20.557 Message: lib/pcapng: Defining dependency "pcapng"
00:02:20.557 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:20.557 Message: lib/power: Defining dependency "power"
00:02:20.557 Message: lib/rawdev: Defining dependency "rawdev"
00:02:20.557 Message: lib/regexdev: Defining dependency "regexdev"
00:02:20.557 Message: lib/mldev: Defining dependency "mldev"
00:02:20.557 Message: lib/rib: Defining dependency "rib"
00:02:20.557 Message: lib/reorder: Defining dependency "reorder"
00:02:20.557 Message: lib/sched: Defining dependency "sched"
00:02:20.557 Message: lib/security: Defining dependency "security"
00:02:20.557 Message: lib/stack: Defining dependency "stack"
00:02:20.557 Has header "linux/userfaultfd.h" : YES
00:02:20.557 Has header "linux/vduse.h" : YES
00:02:20.557 Message: lib/vhost: Defining dependency "vhost"
00:02:20.557 Message: lib/ipsec: Defining dependency "ipsec"
00:02:20.557 Message: lib/pdcp: Defining dependency "pdcp"
00:02:20.557 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:20.557 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:20.557 Message: lib/fib: Defining dependency "fib"
00:02:20.557 Message: lib/port: Defining dependency "port"
00:02:20.557 Message: lib/pdump: Defining dependency "pdump"
00:02:20.557 Message: lib/table: Defining dependency "table"
00:02:20.557 Message: lib/pipeline: Defining dependency "pipeline"
00:02:20.557 Message: lib/graph: Defining dependency "graph"
00:02:20.557 Message: lib/node: Defining dependency "node"
00:02:20.557 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:20.557 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:20.557 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:20.557 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:21.508 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:21.508 Compiler for C supports arguments -Wno-unused-value: YES
00:02:21.508 Compiler for C supports arguments -Wno-format: YES
00:02:21.508 Compiler for C supports arguments -Wno-format-security: YES
00:02:21.508 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:21.508 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:21.508 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:21.508 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:21.508 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:21.508 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:21.508 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:21.508 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:21.508 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:21.508 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:21.508 Has header "sys/epoll.h" : YES
00:02:21.508 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:21.508 Configuring doxy-api-html.conf using configuration
00:02:21.508 Configuring doxy-api-man.conf using configuration
00:02:21.508 Program mandb found: YES (/usr/bin/mandb)
00:02:21.508 Program sphinx-build found: NO
00:02:21.508 Configuring rte_build_config.h using configuration
00:02:21.508 Message:
00:02:21.508 =================
00:02:21.508 Applications Enabled
00:02:21.508 =================
00:02:21.508
00:02:21.508 apps:
00:02:21.508 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev,
test-cmdline, test-compress-perf, 00:02:21.508 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:21.508 test-pmd, test-regex, test-sad, test-security-perf, 00:02:21.508 00:02:21.508 Message: 00:02:21.508 ================= 00:02:21.508 Libraries Enabled 00:02:21.508 ================= 00:02:21.508 00:02:21.508 libs: 00:02:21.508 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:21.508 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:21.508 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:21.508 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:21.508 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:21.508 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:21.508 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:21.508 00:02:21.508 00:02:21.508 Message: 00:02:21.508 =============== 00:02:21.508 Drivers Enabled 00:02:21.508 =============== 00:02:21.508 00:02:21.508 common: 00:02:21.508 00:02:21.508 bus: 00:02:21.508 pci, vdev, 00:02:21.508 mempool: 00:02:21.508 ring, 00:02:21.508 dma: 00:02:21.508 00:02:21.508 net: 00:02:21.508 i40e, 00:02:21.508 raw: 00:02:21.508 00:02:21.508 crypto: 00:02:21.508 00:02:21.508 compress: 00:02:21.508 00:02:21.508 regex: 00:02:21.508 00:02:21.508 ml: 00:02:21.508 00:02:21.508 vdpa: 00:02:21.508 00:02:21.508 event: 00:02:21.508 00:02:21.508 baseband: 00:02:21.508 00:02:21.508 gpu: 00:02:21.508 00:02:21.508 00:02:21.508 Message: 00:02:21.508 ================= 00:02:21.508 Content Skipped 00:02:21.508 ================= 00:02:21.508 00:02:21.508 apps: 00:02:21.508 00:02:21.508 libs: 00:02:21.508 00:02:21.508 drivers: 00:02:21.508 common/cpt: not in enabled drivers build config 00:02:21.508 common/dpaax: not in enabled drivers build config 00:02:21.508 common/iavf: not in enabled drivers build config 00:02:21.508 common/idpf: not in enabled drivers build config 00:02:21.508 common/mvep: not in enabled drivers build config 00:02:21.508 common/octeontx: not in enabled drivers build config 00:02:21.508 bus/auxiliary: not in enabled drivers build config 00:02:21.508 bus/cdx: not in enabled drivers build config 00:02:21.508 bus/dpaa: not in enabled drivers build config 00:02:21.508 bus/fslmc: not in enabled drivers build config 00:02:21.508 bus/ifpga: not in enabled drivers build config 00:02:21.508 bus/platform: not in enabled drivers build config 00:02:21.508 bus/vmbus: not in enabled drivers build config 00:02:21.508 common/cnxk: not in enabled drivers build config 00:02:21.508 common/mlx5: not in enabled drivers build config 00:02:21.508 common/nfp: not in enabled drivers build config 00:02:21.508 common/qat: not in enabled drivers build config 00:02:21.508 common/sfc_efx: not in enabled drivers build config 00:02:21.508 mempool/bucket: not in enabled drivers build config 00:02:21.508 mempool/cnxk: not in enabled drivers build config 00:02:21.508 mempool/dpaa: not in enabled drivers build config 00:02:21.508 mempool/dpaa2: not in enabled drivers build config 00:02:21.508 mempool/octeontx: not in enabled drivers build config 00:02:21.508 mempool/stack: not in enabled drivers build config 00:02:21.508 dma/cnxk: not in enabled drivers build config 00:02:21.508 dma/dpaa: not in enabled drivers build config 00:02:21.508 dma/dpaa2: not in enabled drivers build config 00:02:21.508 dma/hisilicon: not in enabled drivers build config 00:02:21.508 dma/idxd: not in enabled drivers build 
config 00:02:21.508 dma/ioat: not in enabled drivers build config 00:02:21.508 dma/skeleton: not in enabled drivers build config 00:02:21.508 net/af_packet: not in enabled drivers build config 00:02:21.508 net/af_xdp: not in enabled drivers build config 00:02:21.508 net/ark: not in enabled drivers build config 00:02:21.508 net/atlantic: not in enabled drivers build config 00:02:21.508 net/avp: not in enabled drivers build config 00:02:21.508 net/axgbe: not in enabled drivers build config 00:02:21.508 net/bnx2x: not in enabled drivers build config 00:02:21.508 net/bnxt: not in enabled drivers build config 00:02:21.508 net/bonding: not in enabled drivers build config 00:02:21.508 net/cnxk: not in enabled drivers build config 00:02:21.508 net/cpfl: not in enabled drivers build config 00:02:21.508 net/cxgbe: not in enabled drivers build config 00:02:21.508 net/dpaa: not in enabled drivers build config 00:02:21.508 net/dpaa2: not in enabled drivers build config 00:02:21.508 net/e1000: not in enabled drivers build config 00:02:21.508 net/ena: not in enabled drivers build config 00:02:21.508 net/enetc: not in enabled drivers build config 00:02:21.508 net/enetfec: not in enabled drivers build config 00:02:21.508 net/enic: not in enabled drivers build config 00:02:21.508 net/failsafe: not in enabled drivers build config 00:02:21.508 net/fm10k: not in enabled drivers build config 00:02:21.508 net/gve: not in enabled drivers build config 00:02:21.508 net/hinic: not in enabled drivers build config 00:02:21.508 net/hns3: not in enabled drivers build config 00:02:21.508 net/iavf: not in enabled drivers build config 00:02:21.509 net/ice: not in enabled drivers build config 00:02:21.509 net/idpf: not in enabled drivers build config 00:02:21.509 net/igc: not in enabled drivers build config 00:02:21.509 net/ionic: not in enabled drivers build config 00:02:21.509 net/ipn3ke: not in enabled drivers build config 00:02:21.509 net/ixgbe: not in enabled drivers build config 00:02:21.509 net/mana: not in enabled drivers build config 00:02:21.509 net/memif: not in enabled drivers build config 00:02:21.509 net/mlx4: not in enabled drivers build config 00:02:21.509 net/mlx5: not in enabled drivers build config 00:02:21.509 net/mvneta: not in enabled drivers build config 00:02:21.509 net/mvpp2: not in enabled drivers build config 00:02:21.509 net/netvsc: not in enabled drivers build config 00:02:21.509 net/nfb: not in enabled drivers build config 00:02:21.509 net/nfp: not in enabled drivers build config 00:02:21.509 net/ngbe: not in enabled drivers build config 00:02:21.509 net/null: not in enabled drivers build config 00:02:21.509 net/octeontx: not in enabled drivers build config 00:02:21.509 net/octeon_ep: not in enabled drivers build config 00:02:21.509 net/pcap: not in enabled drivers build config 00:02:21.509 net/pfe: not in enabled drivers build config 00:02:21.509 net/qede: not in enabled drivers build config 00:02:21.509 net/ring: not in enabled drivers build config 00:02:21.509 net/sfc: not in enabled drivers build config 00:02:21.509 net/softnic: not in enabled drivers build config 00:02:21.509 net/tap: not in enabled drivers build config 00:02:21.509 net/thunderx: not in enabled drivers build config 00:02:21.509 net/txgbe: not in enabled drivers build config 00:02:21.509 net/vdev_netvsc: not in enabled drivers build config 00:02:21.509 net/vhost: not in enabled drivers build config 00:02:21.509 net/virtio: not in enabled drivers build config 00:02:21.509 net/vmxnet3: not in enabled drivers build config 
00:02:21.509 raw/cnxk_bphy: not in enabled drivers build config 00:02:21.509 raw/cnxk_gpio: not in enabled drivers build config 00:02:21.509 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:21.509 raw/ifpga: not in enabled drivers build config 00:02:21.509 raw/ntb: not in enabled drivers build config 00:02:21.509 raw/skeleton: not in enabled drivers build config 00:02:21.509 crypto/armv8: not in enabled drivers build config 00:02:21.509 crypto/bcmfs: not in enabled drivers build config 00:02:21.509 crypto/caam_jr: not in enabled drivers build config 00:02:21.509 crypto/ccp: not in enabled drivers build config 00:02:21.509 crypto/cnxk: not in enabled drivers build config 00:02:21.509 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.509 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.509 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.509 crypto/mlx5: not in enabled drivers build config 00:02:21.509 crypto/mvsam: not in enabled drivers build config 00:02:21.509 crypto/nitrox: not in enabled drivers build config 00:02:21.509 crypto/null: not in enabled drivers build config 00:02:21.509 crypto/octeontx: not in enabled drivers build config 00:02:21.509 crypto/openssl: not in enabled drivers build config 00:02:21.509 crypto/scheduler: not in enabled drivers build config 00:02:21.509 crypto/uadk: not in enabled drivers build config 00:02:21.509 crypto/virtio: not in enabled drivers build config 00:02:21.509 compress/isal: not in enabled drivers build config 00:02:21.509 compress/mlx5: not in enabled drivers build config 00:02:21.509 compress/octeontx: not in enabled drivers build config 00:02:21.509 compress/zlib: not in enabled drivers build config 00:02:21.509 regex/mlx5: not in enabled drivers build config 00:02:21.509 regex/cn9k: not in enabled drivers build config 00:02:21.509 ml/cnxk: not in enabled drivers build config 00:02:21.509 vdpa/ifc: not in enabled drivers build config 00:02:21.509 vdpa/mlx5: not in enabled drivers build config 00:02:21.509 vdpa/nfp: not in enabled drivers build config 00:02:21.509 vdpa/sfc: not in enabled drivers build config 00:02:21.509 event/cnxk: not in enabled drivers build config 00:02:21.509 event/dlb2: not in enabled drivers build config 00:02:21.509 event/dpaa: not in enabled drivers build config 00:02:21.509 event/dpaa2: not in enabled drivers build config 00:02:21.509 event/dsw: not in enabled drivers build config 00:02:21.509 event/opdl: not in enabled drivers build config 00:02:21.509 event/skeleton: not in enabled drivers build config 00:02:21.509 event/sw: not in enabled drivers build config 00:02:21.509 event/octeontx: not in enabled drivers build config 00:02:21.509 baseband/acc: not in enabled drivers build config 00:02:21.509 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:21.509 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:21.509 baseband/la12xx: not in enabled drivers build config 00:02:21.509 baseband/null: not in enabled drivers build config 00:02:21.509 baseband/turbo_sw: not in enabled drivers build config 00:02:21.509 gpu/cuda: not in enabled drivers build config 00:02:21.509 00:02:21.509 00:02:21.509 Build targets in project: 215 00:02:21.509 00:02:21.509 DPDK 23.11.0 00:02:21.509 00:02:21.509 User defined options 00:02:21.509 libdir : lib 00:02:21.509 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:21.509 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:21.509 c_link_args : 00:02:21.509 enable_docs : false 00:02:21.509 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:21.509 enable_kmods : false 00:02:21.509 machine : native 00:02:21.509 tests : false 00:02:21.509 00:02:21.509 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.509 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:21.509 23:15:07 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:21.509 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:21.509 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.509 [2/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.509 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.509 [4/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.769 [5/705] Linking static target lib/librte_kvargs.a 00:02:21.769 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.769 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.769 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.769 [9/705] Linking static target lib/librte_log.a 00:02:21.769 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.769 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.029 [12/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.029 [13/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.029 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.029 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.029 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.029 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.029 [18/705] Linking target lib/librte_log.so.24.0 00:02:22.291 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.291 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.291 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.291 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.291 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.291 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.291 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.555 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:22.555 [27/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.555 [28/705] Linking target lib/librte_kvargs.so.24.0 00:02:22.555 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.555 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.555 [31/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:22.555 [32/705] Linking static target lib/librte_telemetry.a 00:02:22.555 [33/705] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.555 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.822 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.822 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.822 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.822 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.822 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.822 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.822 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.822 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.822 [43/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.822 [44/705] Linking target lib/librte_telemetry.so.24.0 00:02:23.084 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.084 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.084 [47/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:23.084 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:23.084 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:23.084 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.344 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.344 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.344 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.344 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.344 [55/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.344 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.344 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.344 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.344 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.344 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.344 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.344 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.605 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.605 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.605 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.605 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.605 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.605 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.605 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.605 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:23.605 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.867 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
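For reference, the configure-and-build sequence captured in this log can be reproduced by hand. The sketch below reconstructs it from the "User defined options" block and the ninja command logged above; the paths, option values, and -j10 are taken verbatim from the log, while the exact meson spelling is an assumption based on standard meson/DPDK usage rather than a copy of what the test harness actually ran:

  # configure DPDK 23.11 into build-tmp with the options dumped above
  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
  # build with the same parallelism the harness used
  ninja -C build-tmp -j10

Using the explicit `meson setup` form also avoids the "ambiguous and deprecated" warning that meson printed above when it was invoked as bare `meson [options]`.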
00:02:23.867 [73/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.867 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.867 [75/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.867 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:23.867 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:23.867 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.867 [79/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.128 [80/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.128 [81/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.128 [82/705] Linking static target lib/librte_ring.a 00:02:24.128 [83/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.128 [84/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.128 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.128 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.128 [87/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.388 [88/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.388 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.388 [90/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.388 [91/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.388 [92/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.388 [93/705] Linking static target lib/librte_eal.a 00:02:24.388 [94/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.388 [95/705] Linking static target lib/librte_rcu.a 00:02:24.649 [96/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.649 [97/705] Linking static target lib/librte_mempool.a 00:02:24.649 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.649 [99/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.649 [100/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.649 [101/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.649 [102/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.649 [103/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.649 [104/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.910 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.910 [106/705] Linking static target lib/librte_meter.a 00:02:24.910 [107/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:24.910 [108/705] Linking static target lib/librte_net.a 00:02:24.910 [109/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.910 [110/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.910 [111/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:25.171 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:25.171 [113/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.171 [114/705] Generating lib/meter.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:25.171 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.171 [116/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.171 [117/705] Linking static target lib/librte_mbuf.a 00:02:25.432 [118/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:25.432 [119/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.693 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:25.693 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:25.693 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.693 [123/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:25.693 [124/705] Linking static target lib/librte_pci.a 00:02:25.693 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:25.954 [126/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:25.954 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.954 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.954 [129/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.954 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:25.954 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:25.954 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:25.954 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:25.954 [134/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.954 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:26.215 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:26.215 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.215 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.215 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.215 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:26.215 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:26.215 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:26.215 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:26.215 [144/705] Linking static target lib/librte_cmdline.a 00:02:26.215 [145/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.476 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:26.476 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:26.476 [148/705] Linking static target lib/librte_metrics.a 00:02:26.476 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:26.735 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.735 [151/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:26.735 [152/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:26.735 [153/705] Linking static target lib/librte_timer.a 00:02:26.994 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:26.994 [155/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:26.994 [156/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.252 [157/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:27.252 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:27.252 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:27.510 [160/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:27.510 [161/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:27.510 [162/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:27.510 [163/705] Linking static target lib/librte_bitratestats.a 00:02:27.767 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.767 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:27.767 [166/705] Linking static target lib/librte_bbdev.a 00:02:27.767 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:27.767 [168/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:27.767 [169/705] Linking static target lib/acl/libavx2_tmp.a 00:02:28.024 [170/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:28.024 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:28.024 [172/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:28.282 [173/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:28.282 [174/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.282 [175/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.282 [176/705] Linking static target lib/librte_ethdev.a 00:02:28.282 [177/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:28.282 [178/705] Linking static target lib/librte_cfgfile.a 00:02:28.282 [179/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:28.541 [180/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:28.541 [181/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:28.541 [182/705] Linking static target lib/librte_hash.a 00:02:28.541 [183/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.541 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:28.800 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:28.800 [186/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:28.800 [187/705] Linking static target lib/librte_bpf.a 00:02:28.800 [188/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.800 [189/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:28.800 [190/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:28.800 [191/705] Linking target lib/librte_eal.so.24.0 00:02:28.800 [192/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.800 [193/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:29.064 [194/705] Linking target lib/librte_ring.so.24.0 00:02:29.064 [195/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:29.064 [196/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.064 
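The long runs of "Compiler for C supports arguments -m...: YES" lines in the configure output above are simple compile probes: meson hands the compiler a tiny translation unit together with the candidate flag and records whether it is accepted. A rough shell equivalent is sketched below; the probe source is illustrative, not what meson literally compiles:

  # does the compiler accept -mavx512f? (-Werror turns unknown-flag warnings into failures)
  echo 'int main(void) { return 0; }' > probe.c
  cc -Werror -mavx512f -c probe.c -o /dev/null && echo YES || echo NO

Flags that probe as YES are what let this build include the SIMD-specialized objects seen in the compile stream, such as net_net_crc_avx512.c.o above and the AVX-512 ACL and FIB objects later on.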
[197/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.064 [198/705] Linking target lib/librte_meter.so.24.0 00:02:29.064 [199/705] Linking target lib/librte_pci.so.24.0 00:02:29.064 [200/705] Linking target lib/librte_timer.so.24.0 00:02:29.064 [201/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:29.064 [202/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:29.064 [203/705] Linking static target lib/librte_acl.a 00:02:29.064 [204/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:29.064 [205/705] Linking target lib/librte_rcu.so.24.0 00:02:29.064 [206/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:29.064 [207/705] Linking target lib/librte_mempool.so.24.0 00:02:29.064 [208/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:29.064 [209/705] Linking target lib/librte_cfgfile.so.24.0 00:02:29.064 [210/705] Linking static target lib/librte_compressdev.a 00:02:29.064 [211/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:29.326 [212/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:29.326 [213/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:29.326 [214/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:29.326 [215/705] Linking target lib/librte_mbuf.so.24.0 00:02:29.326 [216/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.326 [217/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:29.326 [218/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:29.326 [219/705] Linking target lib/librte_acl.so.24.0 00:02:29.326 [220/705] Linking target lib/librte_bbdev.so.24.0 00:02:29.326 [221/705] Linking target lib/librte_net.so.24.0 00:02:29.326 [222/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:29.583 [223/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.583 [224/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:29.583 [225/705] Linking static target lib/librte_distributor.a 00:02:29.583 [226/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:29.583 [227/705] Linking target lib/librte_compressdev.so.24.0 00:02:29.584 [228/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:29.584 [229/705] Linking target lib/librte_cmdline.so.24.0 00:02:29.584 [230/705] Linking target lib/librte_hash.so.24.0 00:02:29.584 [231/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:29.584 [232/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:29.584 [233/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.584 [234/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:29.584 [235/705] Linking static target lib/librte_dmadev.a 00:02:29.584 [236/705] Linking target lib/librte_distributor.so.24.0 00:02:29.853 [237/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.853 [238/705] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:29.853 [239/705] Linking target lib/librte_dmadev.so.24.0 00:02:30.112 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:30.112 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:30.112 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:30.112 [243/705] Linking static target lib/librte_efd.a 00:02:30.371 [244/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.371 [245/705] Linking target lib/librte_efd.so.24.0 00:02:30.371 [246/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:30.371 [247/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:30.371 [248/705] Linking static target lib/librte_dispatcher.a 00:02:30.631 [249/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:30.631 [250/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:30.631 [251/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.631 [252/705] Linking static target lib/librte_cryptodev.a 00:02:30.631 [253/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.631 [254/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:30.890 [255/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:30.890 [256/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:30.890 [257/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:30.890 [258/705] Linking static target lib/librte_gpudev.a 00:02:30.890 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:31.148 [260/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:31.148 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:31.148 [262/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:31.148 [263/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:31.404 [264/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:31.405 [265/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:31.405 [266/705] Linking static target lib/librte_gro.a 00:02:31.405 [267/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:31.405 [268/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:31.405 [269/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.405 [270/705] Linking target lib/librte_gpudev.so.24.0 00:02:31.405 [271/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:31.662 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:31.662 [273/705] Linking static target lib/librte_gso.a 00:02:31.662 [274/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.662 [275/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.662 [276/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.662 [277/705] Linking target lib/librte_ethdev.so.24.0 00:02:31.662 [278/705] Linking target lib/librte_cryptodev.so.24.0 00:02:31.662 [279/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:31.662 [280/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:31.662 [281/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:31.662 [282/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:31.662 [283/705] Linking target lib/librte_metrics.so.24.0 00:02:31.920 [284/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:31.920 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:31.920 [286/705] Linking target lib/librte_bpf.so.24.0 00:02:31.920 [287/705] Linking target lib/librte_gro.so.24.0 00:02:31.920 [288/705] Linking target lib/librte_gso.so.24.0 00:02:31.920 [289/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:31.920 [290/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:31.920 [291/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:31.920 [292/705] Linking target lib/librte_bitratestats.so.24.0 00:02:31.920 [293/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:31.920 [294/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:32.179 [295/705] Linking static target lib/librte_eventdev.a 00:02:32.179 [296/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:32.179 [297/705] Linking static target lib/librte_jobstats.a 00:02:32.179 [298/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:32.179 [299/705] Linking static target lib/librte_ip_frag.a 00:02:32.179 [300/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:32.179 [301/705] Linking static target lib/librte_latencystats.a 00:02:32.179 [302/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:32.179 [303/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.179 [304/705] Linking target lib/librte_jobstats.so.24.0 00:02:32.437 [305/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.437 [306/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.437 [307/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:32.437 [308/705] Linking target lib/librte_ip_frag.so.24.0 00:02:32.437 [309/705] Linking target lib/librte_latencystats.so.24.0 00:02:32.437 [310/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.437 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:32.437 [312/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:32.437 [313/705] Linking static target lib/librte_lpm.a 00:02:32.437 [314/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:32.437 [315/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:32.438 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:32.696 [317/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:32.696 [318/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.696 [319/705] Linking target lib/librte_lpm.so.24.0 00:02:32.696 [320/705] Compiling C object 
lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:32.696 [321/705] Linking static target lib/librte_pcapng.a 00:02:32.696 [322/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:32.696 [323/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:32.964 [324/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:32.964 [325/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:32.964 [326/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:32.964 [327/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.964 [328/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:32.964 [329/705] Linking target lib/librte_pcapng.so.24.0 00:02:32.964 [330/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:32.964 [331/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:32.964 [332/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:33.226 [333/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:33.226 [334/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:33.226 [335/705] Linking static target lib/librte_member.a 00:02:33.226 [336/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:33.226 [337/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:33.226 [338/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:33.484 [339/705] Linking static target lib/librte_rawdev.a 00:02:33.484 [340/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:33.484 [341/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:33.484 [342/705] Linking static target lib/librte_regexdev.a 00:02:33.484 [343/705] Linking static target lib/librte_power.a 00:02:33.484 [344/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:33.484 [345/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.484 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:33.484 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:33.484 [348/705] Linking static target lib/librte_mldev.a 00:02:33.484 [349/705] Linking target lib/librte_member.so.24.0 00:02:33.484 [350/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.484 [351/705] Linking target lib/librte_eventdev.so.24.0 00:02:33.743 [352/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:33.743 [353/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.743 [354/705] Linking target lib/librte_dispatcher.so.24.0 00:02:33.743 [355/705] Linking target lib/librte_rawdev.so.24.0 00:02:33.743 [356/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:33.743 [357/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.743 [358/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:33.743 [359/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:33.743 [360/705] Linking target lib/librte_power.so.24.0 00:02:34.001 [361/705] Generating lib/regexdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:34.001 [362/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:34.001 [363/705] Linking static target lib/librte_rib.a 00:02:34.001 [364/705] Linking target lib/librte_regexdev.so.24.0 00:02:34.001 [365/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:34.001 [366/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:34.001 [367/705] Linking static target lib/librte_reorder.a 00:02:34.001 [368/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.001 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:34.001 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:34.001 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:34.001 [372/705] Linking static target lib/librte_stack.a 00:02:34.258 [373/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.258 [374/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.258 [375/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:34.258 [376/705] Linking static target lib/librte_security.a 00:02:34.258 [377/705] Linking target lib/librte_rib.so.24.0 00:02:34.258 [378/705] Linking target lib/librte_reorder.so.24.0 00:02:34.258 [379/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.258 [380/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:34.258 [381/705] Linking target lib/librte_stack.so.24.0 00:02:34.258 [382/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:34.258 [383/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.258 [384/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:34.258 [385/705] Linking target lib/librte_mldev.so.24.0 00:02:34.515 [386/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:34.515 [387/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.515 [388/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.515 [389/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:34.515 [390/705] Linking static target lib/librte_sched.a 00:02:34.515 [391/705] Linking target lib/librte_security.so.24.0 00:02:34.773 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:34.773 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:34.773 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:34.773 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.773 [396/705] Linking target lib/librte_sched.so.24.0 00:02:35.031 [397/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:35.031 [398/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:35.288 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:35.288 [400/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.288 [401/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.288 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:35.288 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:35.548 
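Two kinds of symbol bookkeeping recur throughout this build. The "Generating symbol file lib/librte_<name>.so.24.0.p/....symbols" steps are meson's relink-avoidance cache: the exported-symbol list of each shared library is recorded so dependents are relinked only when the ABI actually changed. The "Generating lib/<name>.sym_chk with a custom command" steps are DPDK's own check that each library exports what its version map promises (buildtools/check-symbols.sh in the DPDK tree). To eyeball a finished library's exports by hand, a plain nm query is enough; the path below simply follows this build tree's layout and assumes the link has completed:

  # list the first few dynamic symbols librte_eal exports
  nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.24.0 | head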
[404/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:35.548 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:35.548 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:35.548 [407/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:35.809 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:35.809 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:35.809 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:35.809 [411/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:35.809 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:35.809 [413/705] Linking static target lib/librte_ipsec.a 00:02:36.070 [414/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.070 [415/705] Linking target lib/librte_ipsec.so.24.0 00:02:36.070 [416/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:36.070 [417/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:36.070 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:36.328 [419/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:36.328 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:36.328 [421/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:36.328 [422/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:36.328 [423/705] Linking static target lib/librte_fib.a 00:02:36.618 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:36.618 [425/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.618 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:36.618 [427/705] Linking target lib/librte_fib.so.24.0 00:02:36.879 [428/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:36.879 [429/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:36.879 [430/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:36.879 [431/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:37.140 [432/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:37.140 [433/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:37.140 [434/705] Linking static target lib/librte_pdcp.a 00:02:37.140 [435/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:37.140 [436/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:37.405 [437/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.405 [438/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:37.405 [439/705] Linking target lib/librte_pdcp.so.24.0 00:02:37.405 [440/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:37.405 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:37.405 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:37.405 [443/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:37.405 [444/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:37.665 [445/705] Linking static target lib/librte_pdump.a 00:02:37.665 [446/705] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:37.665 [447/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:37.665 [448/705] Linking static target lib/librte_port.a 00:02:37.665 [449/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:37.665 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:37.665 [451/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.665 [452/705] Linking target lib/librte_pdump.so.24.0 00:02:37.925 [453/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:37.925 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.925 [455/705] Linking target lib/librte_port.so.24.0 00:02:38.252 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:38.252 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:38.252 [458/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:38.252 [459/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:38.252 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:38.252 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:38.517 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:38.517 [463/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:38.517 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:38.517 [465/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:38.517 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:38.517 [467/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:38.517 [468/705] Linking static target lib/librte_table.a 00:02:38.776 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:39.034 [470/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:39.034 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:39.034 [472/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:39.034 [473/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.034 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:39.034 [475/705] Linking target lib/librte_table.so.24.0 00:02:39.034 [476/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:39.294 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:39.294 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:39.294 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:39.294 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:39.554 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:39.554 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:39.554 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:39.554 [484/705] Linking static target lib/librte_graph.a 00:02:39.816 [485/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:39.816 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:39.817 
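As the compile stream moves from libraries to the enabled drivers below (bus/pci, bus/vdev, mempool/ring, net/i40e), each PMD also gets a generated companion file such as rte_bus_pci.pmd.c. That file embeds a PMD_INFO_STRING record (driver name, supported devices, kernel-module hints) directly in the binary so tooling can query a driver without loading it. A low-tech way to read it back after the build, assuming the shared-object name this configuration produces:

  # dump the embedded PMD metadata from the i40e driver
  strings /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_net_i40e.so.24.0 | grep PMD_INFO_STRING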
[487/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:40.076 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:40.076 [489/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.076 [490/705] Linking target lib/librte_graph.so.24.0 00:02:40.076 [491/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:40.076 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:40.076 [493/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:40.076 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:40.333 [495/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:40.333 [496/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:40.594 [497/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:40.594 [498/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:40.594 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:40.594 [500/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:40.594 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:40.594 [502/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:40.855 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:40.855 [504/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:40.855 [505/705] Linking static target lib/librte_node.a 00:02:40.855 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:40.855 [507/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:40.855 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:40.855 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:40.855 [510/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.117 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.117 [512/705] Linking target lib/librte_node.so.24.0 00:02:41.117 [513/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:41.117 [514/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.377 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:41.377 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:41.377 [517/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:41.377 [518/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:41.377 [519/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:41.377 [520/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.377 [521/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.377 [522/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:41.377 [523/705] Linking static target drivers/librte_bus_pci.a 00:02:41.377 [524/705] Linking static target drivers/librte_bus_vdev.a 00:02:41.638 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:41.638 [526/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:41.638 [527/705] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.638 [528/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:41.638 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:41.638 [530/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:41.638 [531/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:41.638 [532/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:41.638 [533/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:41.638 [534/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.638 [535/705] Linking target drivers/librte_bus_pci.so.24.0 00:02:41.901 [536/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:41.901 [537/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.901 [538/705] Linking static target drivers/librte_mempool_ring.a 00:02:41.901 [539/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:41.901 [540/705] Linking target drivers/librte_mempool_ring.so.24.0 00:02:41.901 [541/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:42.162 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:42.162 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:42.423 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:42.423 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:42.993 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:42.993 [547/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:42.993 [548/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:42.993 [549/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:42.993 [550/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:42.993 [551/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:43.253 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:43.253 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:43.513 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:43.513 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:43.513 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:43.773 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:43.773 [558/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:44.033 [559/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:44.033 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:44.033 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:44.033 [562/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:44.294 [563/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:44.294 [564/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:44.294 [565/705] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:44.294 [566/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:44.294 [567/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:44.555 [568/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:44.555 [569/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:44.555 [570/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:44.555 [571/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:44.555 [572/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:44.816 [573/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:44.816 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:44.816 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:44.816 [576/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:44.816 [577/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:44.816 [578/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:45.076 [579/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:45.076 [580/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:45.076 [581/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:45.076 [582/705] Linking static target drivers/librte_net_i40e.a 00:02:45.076 [583/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:45.076 [584/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:45.076 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:45.336 [586/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:45.336 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:45.596 [588/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.596 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:45.596 [590/705] Linking target drivers/librte_net_i40e.so.24.0 00:02:45.596 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:45.596 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:45.855 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:45.855 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:45.855 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:45.855 [596/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:46.114 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:46.114 [598/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.114 [599/705] Linking static target lib/librte_vhost.a 00:02:46.114 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:46.374 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:46.374 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 
00:02:46.374 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:46.374 [604/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:46.374 [605/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:46.374 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:46.633 [607/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:46.633 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:46.633 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:46.633 [610/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:46.633 [611/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:46.633 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:46.892 [613/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.892 [614/705] Linking target lib/librte_vhost.so.24.0 00:02:46.892 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:46.892 [616/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:46.892 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:47.153 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:47.725 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:47.725 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:47.725 [621/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:47.725 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:47.725 [623/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:47.725 [624/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:47.725 [625/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:47.725 [626/705] Linking static target lib/librte_pipeline.a 00:02:47.985 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:47.985 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:47.985 [629/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:47.985 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:47.985 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:48.245 [632/705] Linking target app/dpdk-dumpcap 00:02:48.245 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:48.245 [634/705] Linking target app/dpdk-graph 00:02:48.245 [635/705] Linking target app/dpdk-test-acl 00:02:48.245 [636/705] Linking target app/dpdk-proc-info 00:02:48.245 [637/705] Linking target app/dpdk-pdump 00:02:48.507 [638/705] Linking target app/dpdk-test-cmdline 00:02:48.507 [639/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:48.507 [640/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:48.507 [641/705] Linking target app/dpdk-test-crypto-perf 00:02:48.507 [642/705] Linking target app/dpdk-test-compress-perf 00:02:48.507 
[643/705] Linking target app/dpdk-test-dma-perf 00:02:48.768 [644/705] Linking target app/dpdk-test-gpudev 00:02:48.768 [645/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:48.768 [646/705] Linking target app/dpdk-test-fib 00:02:48.768 [647/705] Linking target app/dpdk-test-flow-perf 00:02:48.768 [648/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:48.768 [649/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:48.768 [650/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:49.029 [651/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:49.029 [652/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:49.029 [653/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:49.029 [654/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:49.029 [655/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:49.029 [656/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:49.291 [657/705] Linking target app/dpdk-test-eventdev 00:02:49.291 [658/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:49.291 [659/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:49.291 [660/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:49.551 [661/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:49.551 [662/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:49.551 [663/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:49.551 [664/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:49.551 [665/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:49.811 [666/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:49.811 [667/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:49.811 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:49.811 [669/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.811 [670/705] Linking target lib/librte_pipeline.so.24.0 00:02:49.811 [671/705] Linking target app/dpdk-test-bbdev 00:02:49.811 [672/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:50.071 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:50.071 [674/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:50.071 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:50.071 [676/705] Linking target app/dpdk-test-pipeline 00:02:50.331 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:50.331 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:50.331 [679/705] Linking target app/dpdk-test-mldev 00:02:50.331 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:50.331 [681/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:50.331 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:50.591 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:50.591 [684/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:50.591 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:50.851 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:50.851 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:50.851 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:51.112 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:51.112 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:51.112 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:51.373 [692/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:51.373 [693/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:51.373 [694/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:51.635 [695/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:51.635 [696/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:51.635 [697/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:51.635 [698/705] Linking target app/dpdk-test-sad 00:02:51.896 [699/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:51.896 [700/705] Linking target app/dpdk-test-regex 00:02:51.896 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:51.896 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:51.896 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:52.158 [704/705] Linking target app/dpdk-test-security-perf 00:02:52.429 [705/705] Linking target app/dpdk-testpmd 00:02:52.429 23:15:38 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:52.429 23:15:38 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:52.429 23:15:38 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:52.429 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:52.429 [0/1] Installing files. 
00:02:52.701 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.701 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.702 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:52.703 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.703 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.704 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.705 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.705 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:52.706 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:52.706 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.706 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.966 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:52.967 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:52.967 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.967 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.967 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.967 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:52.967 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:52.967 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.967 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.968 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:52.970 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:52.970 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:52.970 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:52.970 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:52.970 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:52.970 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:52.970 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:52.970 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:52.970 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:52.970 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:52.970 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:52.970 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:52.970 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:52.970 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:52.970 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:52.970 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:52.970 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:52.970 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:52.970 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:52.970 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:52.970 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:52.970 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:52.970 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:52.970 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:52.970 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:52.970 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:52.970 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:52.970 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:52.970 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:52.970 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:52.970 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:52.970 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:52.970 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:52.970 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:52.970 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:52.970 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:52.970 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:52.970 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:52.971 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:52.971 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:52.971 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:52.971 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:52.971 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:52.971 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:52.971 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:52.971 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:52.971 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:52.971 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:52.971 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:52.971 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:52.971 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:52.971 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:52.971 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:52.971 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:52.971 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:52.971 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:52.971 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:52.971 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:52.971 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:52.971 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:52.971 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:52.971 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:52.971 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:52.971 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:52.971 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:52.971 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:52.971 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:52.971 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:52.971 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:52.971 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:52.971 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:52.971 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:52.971 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:52.971 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:52.971 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:52.971 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:52.971 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:52.971 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:52.971 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:52.971 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:52.971 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:52.971 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:52.971 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:52.971 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:52.971 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:52.971 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:52.971 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:52.971 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:52.971 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:52.971 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:52.971 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:52.971 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:52.971 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:52.971 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:52.971 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:52.971 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:52.971 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:52.971 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:52.971 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:52.971 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:52.971 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:52.971 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:52.971 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:52.971 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:52.971 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:52.971 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:52.971 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:52.971 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:52.971 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:52.971 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:52.971 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:52.971 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:52.971 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:52.971 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:52.971 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:52.971 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:52.971 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:52.971 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:52.971 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:52.971 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:52.971 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:52.971 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:52.971 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:52.971 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:52.971 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:52.971 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:52.971 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:52.971 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:52.971 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:52.971 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:52.972 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:52.972 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:02:52.972 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:52.972 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:53.230 23:15:39 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:53.230 23:15:39 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:53.230 00:02:53.230 real 0m37.915s 00:02:53.230 user 4m21.900s 00:02:53.230 sys 0m38.460s 00:02:53.230 23:15:39 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.230 23:15:39 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:53.230 ************************************ 00:02:53.230 END TEST build_native_dpdk 00:02:53.230 ************************************ 00:02:53.230 23:15:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:53.230 23:15:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:53.230 23:15:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:53.230 23:15:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:53.230 23:15:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:53.230 23:15:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:53.230 23:15:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:53.230 23:15:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:53.230 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:53.230 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:53.230 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:53.230 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:54.170 Using 'verbs' RDMA provider 00:03:05.104 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:15.073 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:15.073 Creating mk/config.mk...done. 00:03:15.073 Creating mk/cc.flags.mk...done. 00:03:15.073 Type 'make' to build. 00:03:15.073 23:16:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:15.073 23:16:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:15.073 23:16:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:15.073 23:16:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:15.073 ************************************ 00:03:15.073 START TEST make 00:03:15.073 ************************************ 00:03:15.073 23:16:00 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:15.073 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:15.073 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:15.073 meson setup builddir \ 00:03:15.073 -Dwith-libaio=enabled \ 00:03:15.073 -Dwith-liburing=enabled \ 00:03:15.073 -Dwith-libvfn=disabled \ 00:03:15.073 -Dwith-spdk=disabled \ 00:03:15.073 -Dexamples=false \ 00:03:15.073 -Dtests=false \ 00:03:15.073 -Dtools=false && \ 00:03:15.073 meson compile -C builddir && \ 00:03:15.073 cd -) 00:03:15.331 make[1]: Nothing to be done for 'all'. 
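Condensed for reference, the flow recorded above amounts to: DPDK is installed into its build/ prefix (headers, libraries, pkg-config files, and PMD symlinks under dpdk/pmds-24.0), SPDK's configure is then pointed at that prefix, and xnvme is configured as a meson subproject before the main make starts. The sketch below is a minimal reconstruction under the same /home/vagrant/spdk_repo/{dpdk,spdk} layout as this run; every path and flag is copied from the invocations in the log itself, and the configure line is abbreviated to the DPDK- and xnvme-relevant options rather than the full flag list shown above.

    # Sketch of the sequence logged above; assumes the log's checkout layout
    # and that DPDK was already built into /home/vagrant/spdk_repo/dpdk/build.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror \
                --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
                --with-xnvme --with-shared
    # configure picks up DPDK via the installed pkg-config files
    # (build/lib/pkgconfig/libdpdk.pc, as reported in the log).
    make -j10
    # The xnvme subproject is set up by make exactly as logged:
    #   meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
    #     -Dwith-libvfn=disabled -Dwith-spdk=disabled \
    #     -Dexamples=false -Dtests=false -Dtools=false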
00:03:17.229 The Meson build system 00:03:17.229 Version: 1.5.0 00:03:17.229 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:17.229 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:17.229 Build type: native build 00:03:17.229 Project name: xnvme 00:03:17.229 Project version: 0.7.5 00:03:17.229 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:17.229 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:17.229 Host machine cpu family: x86_64 00:03:17.229 Host machine cpu: x86_64 00:03:17.229 Message: host_machine.system: linux 00:03:17.229 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:17.229 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:17.229 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:17.229 Run-time dependency threads found: YES 00:03:17.229 Has header "setupapi.h" : NO 00:03:17.229 Has header "linux/blkzoned.h" : YES 00:03:17.229 Has header "linux/blkzoned.h" : YES (cached) 00:03:17.230 Has header "libaio.h" : YES 00:03:17.230 Library aio found: YES 00:03:17.230 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:17.230 Run-time dependency liburing found: YES 2.2 00:03:17.230 Dependency libvfn skipped: feature with-libvfn disabled 00:03:17.230 Found CMake: /usr/bin/cmake (3.27.7) 00:03:17.230 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:17.230 Subproject spdk : skipped: feature with-spdk disabled 00:03:17.230 Run-time dependency appleframeworks found: NO (tried framework) 00:03:17.230 Run-time dependency appleframeworks found: NO (tried framework) 00:03:17.230 Library rt found: YES 00:03:17.230 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:17.230 Configuring xnvme_config.h using configuration 00:03:17.230 Configuring xnvme.spec using configuration 00:03:17.230 Run-time dependency bash-completion found: YES 2.11 00:03:17.230 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:17.230 Program cp found: YES (/usr/bin/cp) 00:03:17.230 Build targets in project: 3 00:03:17.230 00:03:17.230 xnvme 0.7.5 00:03:17.230 00:03:17.230 Subprojects 00:03:17.230 spdk : NO Feature 'with-spdk' disabled 00:03:17.230 00:03:17.230 User defined options 00:03:17.230 examples : false 00:03:17.230 tests : false 00:03:17.230 tools : false 00:03:17.230 with-libaio : enabled 00:03:17.230 with-liburing: enabled 00:03:17.230 with-libvfn : disabled 00:03:17.230 with-spdk : disabled 00:03:17.230 00:03:17.230 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:17.230 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:17.230 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:17.230 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:17.230 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:17.488 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:17.488 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:17.488 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:17.488 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:17.488 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:17.488 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:17.488 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:17.488 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:17.488 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:17.488 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:17.488 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:17.488 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:17.488 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:17.488 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:17.488 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:17.488 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:17.488 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:17.488 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:17.488 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:17.488 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:17.488 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:17.488 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:17.488 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:17.488 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:17.488 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:17.488 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:17.747 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:17.747 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:17.747 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:17.747 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:17.747 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:17.747 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:17.747 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:17.747 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:17.747 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:17.747 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:17.747 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:17.747 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:17.747 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:17.747 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:17.747 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:17.747 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:17.747 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:17.747 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:17.747 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:17.747 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:17.747 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:17.747 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:17.747 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:17.747 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:17.747 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:17.747 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:17.747 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:17.747 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:17.747 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:17.747 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:17.747 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:17.747 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:17.747 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:18.006 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:18.006 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:18.006 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:18.006 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:18.006 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:18.006 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:18.006 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:18.006 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:18.006 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:18.006 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:18.006 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:18.573 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:18.573 [75/76] Linking static target lib/libxnvme.a 00:03:18.573 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:18.573 INFO: autodetecting backend as ninja 00:03:18.573 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:18.573 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:50.681 CC lib/log/log.o 00:03:50.681 CC lib/log/log_flags.o 00:03:50.681 CC lib/log/log_deprecated.o 00:03:50.681 CC lib/ut/ut.o 00:03:50.681 CC lib/ut_mock/mock.o 00:03:50.681 LIB libspdk_ut.a 00:03:50.681 LIB libspdk_log.a 00:03:50.681 LIB libspdk_ut_mock.a 00:03:50.681 SO libspdk_ut.so.2.0 00:03:50.681 SO libspdk_log.so.7.1 00:03:50.681 SO libspdk_ut_mock.so.6.0 00:03:50.681 SYMLINK libspdk_ut.so 00:03:50.681 SYMLINK libspdk_ut_mock.so 00:03:50.681 SYMLINK libspdk_log.so 00:03:50.681 CC lib/ioat/ioat.o 00:03:50.681 CC lib/util/bit_array.o 00:03:50.681 CXX lib/trace_parser/trace.o 00:03:50.681 CC lib/util/base64.o 00:03:50.681 CC lib/util/crc16.o 00:03:50.681 CC lib/util/cpuset.o 00:03:50.681 CC lib/util/crc32.o 00:03:50.681 CC lib/util/crc32c.o 00:03:50.681 CC lib/dma/dma.o 00:03:50.681 CC lib/vfio_user/host/vfio_user_pci.o 00:03:50.681 CC lib/util/crc32_ieee.o 00:03:50.681 CC lib/util/crc64.o 00:03:50.681 CC lib/vfio_user/host/vfio_user.o 00:03:50.681 CC lib/util/dif.o 00:03:50.681 LIB libspdk_dma.a 00:03:50.681 CC lib/util/fd.o 00:03:50.681 CC lib/util/fd_group.o 00:03:50.681 SO libspdk_dma.so.5.0 00:03:50.681 CC lib/util/file.o 00:03:50.681 CC lib/util/hexlify.o 00:03:50.681 LIB libspdk_ioat.a 00:03:50.681 SYMLINK libspdk_dma.so 00:03:50.681 CC lib/util/iov.o 00:03:50.681 
SO libspdk_ioat.so.7.0 00:03:50.681 CC lib/util/math.o 00:03:50.681 CC lib/util/net.o 00:03:50.681 SYMLINK libspdk_ioat.so 00:03:50.681 LIB libspdk_vfio_user.a 00:03:50.681 CC lib/util/pipe.o 00:03:50.681 CC lib/util/strerror_tls.o 00:03:50.681 SO libspdk_vfio_user.so.5.0 00:03:50.681 CC lib/util/string.o 00:03:50.681 SYMLINK libspdk_vfio_user.so 00:03:50.681 CC lib/util/uuid.o 00:03:50.681 CC lib/util/xor.o 00:03:50.681 CC lib/util/zipf.o 00:03:50.681 CC lib/util/md5.o 00:03:50.681 LIB libspdk_util.a 00:03:50.681 SO libspdk_util.so.10.1 00:03:50.681 LIB libspdk_trace_parser.a 00:03:50.681 SYMLINK libspdk_util.so 00:03:50.681 SO libspdk_trace_parser.so.6.0 00:03:50.942 SYMLINK libspdk_trace_parser.so 00:03:50.942 CC lib/idxd/idxd.o 00:03:50.942 CC lib/idxd/idxd_user.o 00:03:50.942 CC lib/rdma_utils/rdma_utils.o 00:03:50.942 CC lib/idxd/idxd_kernel.o 00:03:50.942 CC lib/conf/conf.o 00:03:50.942 CC lib/json/json_parse.o 00:03:50.942 CC lib/json/json_util.o 00:03:50.942 CC lib/vmd/vmd.o 00:03:50.942 CC lib/vmd/led.o 00:03:50.942 CC lib/env_dpdk/env.o 00:03:50.942 CC lib/json/json_write.o 00:03:50.942 CC lib/env_dpdk/memory.o 00:03:51.203 LIB libspdk_conf.a 00:03:51.203 CC lib/env_dpdk/pci.o 00:03:51.203 SO libspdk_conf.so.6.0 00:03:51.203 CC lib/env_dpdk/init.o 00:03:51.203 CC lib/env_dpdk/threads.o 00:03:51.203 LIB libspdk_rdma_utils.a 00:03:51.203 SO libspdk_rdma_utils.so.1.0 00:03:51.203 SYMLINK libspdk_conf.so 00:03:51.203 CC lib/env_dpdk/pci_ioat.o 00:03:51.203 SYMLINK libspdk_rdma_utils.so 00:03:51.203 CC lib/env_dpdk/pci_virtio.o 00:03:51.203 CC lib/env_dpdk/pci_vmd.o 00:03:51.203 LIB libspdk_json.a 00:03:51.203 CC lib/rdma_provider/common.o 00:03:51.203 SO libspdk_json.so.6.0 00:03:51.464 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:51.464 SYMLINK libspdk_json.so 00:03:51.464 CC lib/env_dpdk/pci_idxd.o 00:03:51.464 CC lib/env_dpdk/pci_event.o 00:03:51.464 LIB libspdk_vmd.a 00:03:51.464 SO libspdk_vmd.so.6.0 00:03:51.464 CC lib/env_dpdk/sigbus_handler.o 00:03:51.464 CC lib/env_dpdk/pci_dpdk.o 00:03:51.464 SYMLINK libspdk_vmd.so 00:03:51.464 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:51.464 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:51.464 LIB libspdk_idxd.a 00:03:51.464 SO libspdk_idxd.so.12.1 00:03:51.464 LIB libspdk_rdma_provider.a 00:03:51.464 SO libspdk_rdma_provider.so.7.0 00:03:51.464 SYMLINK libspdk_idxd.so 00:03:51.724 SYMLINK libspdk_rdma_provider.so 00:03:51.724 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:51.724 CC lib/jsonrpc/jsonrpc_server.o 00:03:51.724 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:51.724 CC lib/jsonrpc/jsonrpc_client.o 00:03:51.986 LIB libspdk_jsonrpc.a 00:03:51.986 SO libspdk_jsonrpc.so.6.0 00:03:51.986 SYMLINK libspdk_jsonrpc.so 00:03:52.247 CC lib/rpc/rpc.o 00:03:52.247 LIB libspdk_env_dpdk.a 00:03:52.247 SO libspdk_env_dpdk.so.15.1 00:03:52.508 LIB libspdk_rpc.a 00:03:52.508 SO libspdk_rpc.so.6.0 00:03:52.508 SYMLINK libspdk_env_dpdk.so 00:03:52.508 SYMLINK libspdk_rpc.so 00:03:52.768 CC lib/trace/trace_flags.o 00:03:52.768 CC lib/trace/trace.o 00:03:52.768 CC lib/trace/trace_rpc.o 00:03:52.768 CC lib/keyring/keyring.o 00:03:52.768 CC lib/keyring/keyring_rpc.o 00:03:52.768 CC lib/notify/notify.o 00:03:52.768 CC lib/notify/notify_rpc.o 00:03:52.768 LIB libspdk_notify.a 00:03:52.768 SO libspdk_notify.so.6.0 00:03:52.768 LIB libspdk_keyring.a 00:03:52.768 SYMLINK libspdk_notify.so 00:03:53.028 LIB libspdk_trace.a 00:03:53.028 SO libspdk_keyring.so.2.0 00:03:53.028 SO libspdk_trace.so.11.0 00:03:53.028 SYMLINK libspdk_keyring.so 00:03:53.028 SYMLINK libspdk_trace.so 
00:03:53.288 CC lib/sock/sock.o 00:03:53.288 CC lib/sock/sock_rpc.o 00:03:53.288 CC lib/thread/iobuf.o 00:03:53.288 CC lib/thread/thread.o 00:03:53.547 LIB libspdk_sock.a 00:03:53.547 SO libspdk_sock.so.10.0 00:03:53.807 SYMLINK libspdk_sock.so 00:03:54.067 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:54.067 CC lib/nvme/nvme_ns_cmd.o 00:03:54.067 CC lib/nvme/nvme_ctrlr.o 00:03:54.067 CC lib/nvme/nvme_fabric.o 00:03:54.067 CC lib/nvme/nvme_ns.o 00:03:54.067 CC lib/nvme/nvme_pcie.o 00:03:54.067 CC lib/nvme/nvme_pcie_common.o 00:03:54.067 CC lib/nvme/nvme_qpair.o 00:03:54.067 CC lib/nvme/nvme.o 00:03:54.709 CC lib/nvme/nvme_quirks.o 00:03:54.709 CC lib/nvme/nvme_transport.o 00:03:54.709 CC lib/nvme/nvme_discovery.o 00:03:54.709 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:54.709 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:54.709 LIB libspdk_thread.a 00:03:54.709 CC lib/nvme/nvme_tcp.o 00:03:54.709 SO libspdk_thread.so.11.0 00:03:54.969 CC lib/nvme/nvme_opal.o 00:03:54.969 SYMLINK libspdk_thread.so 00:03:54.969 CC lib/nvme/nvme_io_msg.o 00:03:54.969 CC lib/nvme/nvme_poll_group.o 00:03:55.227 CC lib/nvme/nvme_zns.o 00:03:55.227 CC lib/nvme/nvme_stubs.o 00:03:55.227 CC lib/nvme/nvme_auth.o 00:03:55.227 CC lib/nvme/nvme_cuse.o 00:03:55.227 CC lib/nvme/nvme_rdma.o 00:03:55.484 CC lib/accel/accel.o 00:03:55.484 CC lib/accel/accel_rpc.o 00:03:55.484 CC lib/accel/accel_sw.o 00:03:55.741 CC lib/blob/blobstore.o 00:03:55.741 CC lib/init/json_config.o 00:03:55.998 CC lib/virtio/virtio.o 00:03:55.998 CC lib/init/subsystem.o 00:03:55.998 CC lib/init/subsystem_rpc.o 00:03:55.998 CC lib/init/rpc.o 00:03:55.998 CC lib/blob/request.o 00:03:56.255 CC lib/blob/zeroes.o 00:03:56.255 CC lib/blob/blob_bs_dev.o 00:03:56.255 CC lib/virtio/virtio_vhost_user.o 00:03:56.255 CC lib/fsdev/fsdev.o 00:03:56.255 LIB libspdk_init.a 00:03:56.255 SO libspdk_init.so.6.0 00:03:56.255 CC lib/fsdev/fsdev_io.o 00:03:56.255 CC lib/virtio/virtio_vfio_user.o 00:03:56.255 SYMLINK libspdk_init.so 00:03:56.255 CC lib/fsdev/fsdev_rpc.o 00:03:56.512 CC lib/virtio/virtio_pci.o 00:03:56.512 CC lib/event/app.o 00:03:56.512 CC lib/event/log_rpc.o 00:03:56.512 CC lib/event/reactor.o 00:03:56.512 CC lib/event/app_rpc.o 00:03:56.512 CC lib/event/scheduler_static.o 00:03:56.770 LIB libspdk_accel.a 00:03:56.770 LIB libspdk_nvme.a 00:03:56.770 SO libspdk_accel.so.16.0 00:03:56.770 LIB libspdk_virtio.a 00:03:56.770 SYMLINK libspdk_accel.so 00:03:56.770 SO libspdk_virtio.so.7.0 00:03:56.770 SO libspdk_nvme.so.15.0 00:03:56.770 LIB libspdk_fsdev.a 00:03:56.770 SYMLINK libspdk_virtio.so 00:03:56.770 SO libspdk_fsdev.so.2.0 00:03:56.770 SYMLINK libspdk_fsdev.so 00:03:57.029 CC lib/bdev/bdev.o 00:03:57.029 CC lib/bdev/bdev_zone.o 00:03:57.029 CC lib/bdev/bdev_rpc.o 00:03:57.029 CC lib/bdev/part.o 00:03:57.029 CC lib/bdev/scsi_nvme.o 00:03:57.029 LIB libspdk_event.a 00:03:57.029 SO libspdk_event.so.14.0 00:03:57.029 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:57.029 SYMLINK libspdk_nvme.so 00:03:57.029 SYMLINK libspdk_event.so 00:03:57.594 LIB libspdk_fuse_dispatcher.a 00:03:57.594 SO libspdk_fuse_dispatcher.so.1.0 00:03:57.594 SYMLINK libspdk_fuse_dispatcher.so 00:03:58.967 LIB libspdk_blob.a 00:03:58.967 SO libspdk_blob.so.11.0 00:03:59.226 LIB libspdk_bdev.a 00:03:59.226 SYMLINK libspdk_blob.so 00:03:59.226 SO libspdk_bdev.so.17.0 00:03:59.226 SYMLINK libspdk_bdev.so 00:03:59.226 CC lib/lvol/lvol.o 00:03:59.226 CC lib/blobfs/blobfs.o 00:03:59.226 CC lib/blobfs/tree.o 00:03:59.484 CC lib/scsi/dev.o 00:03:59.484 CC lib/scsi/lun.o 00:03:59.484 CC lib/nvmf/ctrlr.o 
00:03:59.484 CC lib/scsi/port.o 00:03:59.484 CC lib/ftl/ftl_core.o 00:03:59.484 CC lib/nbd/nbd.o 00:03:59.484 CC lib/ublk/ublk.o 00:03:59.484 CC lib/ublk/ublk_rpc.o 00:03:59.484 CC lib/nbd/nbd_rpc.o 00:03:59.740 CC lib/ftl/ftl_init.o 00:03:59.740 CC lib/scsi/scsi.o 00:03:59.740 CC lib/nvmf/ctrlr_discovery.o 00:03:59.740 CC lib/scsi/scsi_bdev.o 00:03:59.740 CC lib/scsi/scsi_pr.o 00:03:59.740 LIB libspdk_nbd.a 00:03:59.740 CC lib/ftl/ftl_layout.o 00:03:59.740 SO libspdk_nbd.so.7.0 00:03:59.740 CC lib/ftl/ftl_debug.o 00:03:59.998 SYMLINK libspdk_nbd.so 00:03:59.998 CC lib/scsi/scsi_rpc.o 00:03:59.998 CC lib/scsi/task.o 00:03:59.998 LIB libspdk_ublk.a 00:03:59.998 CC lib/ftl/ftl_io.o 00:03:59.998 CC lib/ftl/ftl_sb.o 00:03:59.998 SO libspdk_ublk.so.3.0 00:04:00.255 CC lib/nvmf/ctrlr_bdev.o 00:04:00.255 SYMLINK libspdk_ublk.so 00:04:00.255 CC lib/nvmf/subsystem.o 00:04:00.255 LIB libspdk_blobfs.a 00:04:00.255 SO libspdk_blobfs.so.10.0 00:04:00.255 CC lib/nvmf/nvmf.o 00:04:00.255 LIB libspdk_scsi.a 00:04:00.255 CC lib/ftl/ftl_l2p.o 00:04:00.255 SYMLINK libspdk_blobfs.so 00:04:00.255 CC lib/ftl/ftl_l2p_flat.o 00:04:00.255 CC lib/ftl/ftl_nv_cache.o 00:04:00.255 SO libspdk_scsi.so.9.0 00:04:00.255 LIB libspdk_lvol.a 00:04:00.255 CC lib/ftl/ftl_band.o 00:04:00.255 SO libspdk_lvol.so.10.0 00:04:00.256 SYMLINK libspdk_scsi.so 00:04:00.513 SYMLINK libspdk_lvol.so 00:04:00.513 CC lib/ftl/ftl_band_ops.o 00:04:00.513 CC lib/nvmf/nvmf_rpc.o 00:04:00.513 CC lib/iscsi/conn.o 00:04:00.513 CC lib/vhost/vhost.o 00:04:00.770 CC lib/ftl/ftl_writer.o 00:04:00.770 CC lib/iscsi/init_grp.o 00:04:00.770 CC lib/nvmf/transport.o 00:04:01.027 CC lib/nvmf/tcp.o 00:04:01.027 CC lib/nvmf/stubs.o 00:04:01.027 CC lib/nvmf/mdns_server.o 00:04:01.284 CC lib/iscsi/iscsi.o 00:04:01.284 CC lib/nvmf/rdma.o 00:04:01.284 CC lib/ftl/ftl_rq.o 00:04:01.284 CC lib/nvmf/auth.o 00:04:01.284 CC lib/iscsi/param.o 00:04:01.540 CC lib/vhost/vhost_rpc.o 00:04:01.540 CC lib/ftl/ftl_reloc.o 00:04:01.540 CC lib/iscsi/portal_grp.o 00:04:01.540 CC lib/ftl/ftl_l2p_cache.o 00:04:01.540 CC lib/vhost/vhost_scsi.o 00:04:01.797 CC lib/iscsi/tgt_node.o 00:04:01.797 CC lib/iscsi/iscsi_subsystem.o 00:04:01.797 CC lib/vhost/vhost_blk.o 00:04:01.797 CC lib/ftl/ftl_p2l.o 00:04:02.055 CC lib/vhost/rte_vhost_user.o 00:04:02.055 CC lib/iscsi/iscsi_rpc.o 00:04:02.055 CC lib/iscsi/task.o 00:04:02.055 CC lib/ftl/ftl_p2l_log.o 00:04:02.055 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:02.312 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:02.569 LIB libspdk_iscsi.a 00:04:02.569 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:02.569 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:02.569 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:02.569 SO libspdk_iscsi.so.8.0 00:04:02.569 CC lib/ftl/utils/ftl_conf.o 00:04:02.569 CC lib/ftl/utils/ftl_md.o 00:04:02.569 CC lib/ftl/utils/ftl_mempool.o 00:04:02.569 CC lib/ftl/utils/ftl_bitmap.o 00:04:02.827 CC lib/ftl/utils/ftl_property.o 00:04:02.827 SYMLINK libspdk_iscsi.so 00:04:02.827 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:02.827 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:02.827 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:02.827 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:02.827 LIB libspdk_vhost.a 
00:04:02.827 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:02.827 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:02.827 SO libspdk_vhost.so.8.0 00:04:02.827 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:02.827 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:03.083 SYMLINK libspdk_vhost.so 00:04:03.083 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:03.083 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:03.083 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:03.083 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:03.083 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:03.083 CC lib/ftl/base/ftl_base_dev.o 00:04:03.083 CC lib/ftl/base/ftl_base_bdev.o 00:04:03.083 CC lib/ftl/ftl_trace.o 00:04:03.340 LIB libspdk_ftl.a 00:04:03.340 LIB libspdk_nvmf.a 00:04:03.599 SO libspdk_ftl.so.9.0 00:04:03.599 SO libspdk_nvmf.so.20.0 00:04:03.599 SYMLINK libspdk_ftl.so 00:04:03.599 SYMLINK libspdk_nvmf.so 00:04:03.856 CC module/env_dpdk/env_dpdk_rpc.o 00:04:03.856 CC module/accel/iaa/accel_iaa.o 00:04:03.856 CC module/accel/dsa/accel_dsa.o 00:04:03.856 CC module/accel/error/accel_error.o 00:04:03.856 CC module/accel/ioat/accel_ioat.o 00:04:03.856 CC module/sock/posix/posix.o 00:04:03.856 CC module/blob/bdev/blob_bdev.o 00:04:03.856 CC module/keyring/file/keyring.o 00:04:03.856 CC module/fsdev/aio/fsdev_aio.o 00:04:04.114 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:04.114 LIB libspdk_env_dpdk_rpc.a 00:04:04.114 SO libspdk_env_dpdk_rpc.so.6.0 00:04:04.114 CC module/accel/iaa/accel_iaa_rpc.o 00:04:04.114 CC module/keyring/file/keyring_rpc.o 00:04:04.114 SYMLINK libspdk_env_dpdk_rpc.so 00:04:04.114 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:04.114 CC module/accel/error/accel_error_rpc.o 00:04:04.114 CC module/accel/ioat/accel_ioat_rpc.o 00:04:04.114 LIB libspdk_scheduler_dynamic.a 00:04:04.114 LIB libspdk_accel_iaa.a 00:04:04.114 LIB libspdk_keyring_file.a 00:04:04.114 CC module/fsdev/aio/linux_aio_mgr.o 00:04:04.114 SO libspdk_scheduler_dynamic.so.4.0 00:04:04.371 LIB libspdk_blob_bdev.a 00:04:04.371 SO libspdk_accel_iaa.so.3.0 00:04:04.371 SO libspdk_keyring_file.so.2.0 00:04:04.371 LIB libspdk_accel_error.a 00:04:04.371 SO libspdk_blob_bdev.so.11.0 00:04:04.371 CC module/accel/dsa/accel_dsa_rpc.o 00:04:04.371 SO libspdk_accel_error.so.2.0 00:04:04.371 SYMLINK libspdk_scheduler_dynamic.so 00:04:04.372 SYMLINK libspdk_keyring_file.so 00:04:04.372 SYMLINK libspdk_blob_bdev.so 00:04:04.372 SYMLINK libspdk_accel_iaa.so 00:04:04.372 LIB libspdk_accel_ioat.a 00:04:04.372 SYMLINK libspdk_accel_error.so 00:04:04.372 SO libspdk_accel_ioat.so.6.0 00:04:04.372 SYMLINK libspdk_accel_ioat.so 00:04:04.372 LIB libspdk_accel_dsa.a 00:04:04.372 CC module/keyring/linux/keyring.o 00:04:04.372 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:04.372 CC module/scheduler/gscheduler/gscheduler.o 00:04:04.372 SO libspdk_accel_dsa.so.5.0 00:04:04.629 SYMLINK libspdk_accel_dsa.so 00:04:04.629 CC module/keyring/linux/keyring_rpc.o 00:04:04.629 CC module/bdev/delay/vbdev_delay.o 00:04:04.629 CC module/bdev/error/vbdev_error.o 00:04:04.629 CC module/blobfs/bdev/blobfs_bdev.o 00:04:04.629 CC module/bdev/gpt/gpt.o 00:04:04.629 CC module/bdev/gpt/vbdev_gpt.o 00:04:04.629 LIB libspdk_scheduler_dpdk_governor.a 00:04:04.629 LIB libspdk_scheduler_gscheduler.a 00:04:04.629 SO libspdk_scheduler_gscheduler.so.4.0 00:04:04.629 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:04.629 LIB libspdk_sock_posix.a 00:04:04.629 LIB libspdk_keyring_linux.a 00:04:04.629 SO libspdk_keyring_linux.so.1.0 00:04:04.629 SO libspdk_sock_posix.so.6.0 00:04:04.629 SYMLINK libspdk_scheduler_gscheduler.so 
00:04:04.629 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:04.629 LIB libspdk_fsdev_aio.a 00:04:04.629 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:04.629 CC module/bdev/error/vbdev_error_rpc.o 00:04:04.629 SO libspdk_fsdev_aio.so.1.0 00:04:04.629 SYMLINK libspdk_keyring_linux.so 00:04:04.629 SYMLINK libspdk_sock_posix.so 00:04:04.629 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:04.886 SYMLINK libspdk_fsdev_aio.so 00:04:04.886 LIB libspdk_bdev_error.a 00:04:04.887 SO libspdk_bdev_error.so.6.0 00:04:04.887 LIB libspdk_blobfs_bdev.a 00:04:04.887 CC module/bdev/lvol/vbdev_lvol.o 00:04:04.887 CC module/bdev/null/bdev_null.o 00:04:04.887 CC module/bdev/malloc/bdev_malloc.o 00:04:04.887 SO libspdk_blobfs_bdev.so.6.0 00:04:04.887 LIB libspdk_bdev_gpt.a 00:04:04.887 SO libspdk_bdev_gpt.so.6.0 00:04:04.887 SYMLINK libspdk_bdev_error.so 00:04:04.887 SYMLINK libspdk_blobfs_bdev.so 00:04:04.887 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:04.887 CC module/bdev/nvme/bdev_nvme.o 00:04:04.887 LIB libspdk_bdev_delay.a 00:04:04.887 CC module/bdev/passthru/vbdev_passthru.o 00:04:04.887 SO libspdk_bdev_delay.so.6.0 00:04:04.887 SYMLINK libspdk_bdev_gpt.so 00:04:04.887 SYMLINK libspdk_bdev_delay.so 00:04:04.887 CC module/bdev/raid/bdev_raid.o 00:04:04.887 CC module/bdev/split/vbdev_split.o 00:04:05.144 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:05.144 CC module/bdev/null/bdev_null_rpc.o 00:04:05.144 CC module/bdev/xnvme/bdev_xnvme.o 00:04:05.144 CC module/bdev/aio/bdev_aio.o 00:04:05.144 LIB libspdk_bdev_malloc.a 00:04:05.144 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:05.144 CC module/bdev/split/vbdev_split_rpc.o 00:04:05.144 SO libspdk_bdev_malloc.so.6.0 00:04:05.144 LIB libspdk_bdev_null.a 00:04:05.402 SO libspdk_bdev_null.so.6.0 00:04:05.402 SYMLINK libspdk_bdev_malloc.so 00:04:05.402 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:05.402 SYMLINK libspdk_bdev_null.so 00:04:05.402 LIB libspdk_bdev_passthru.a 00:04:05.402 LIB libspdk_bdev_split.a 00:04:05.402 SO libspdk_bdev_passthru.so.6.0 00:04:05.402 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:05.402 SO libspdk_bdev_split.so.6.0 00:04:05.402 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:05.402 CC module/bdev/ftl/bdev_ftl.o 00:04:05.402 SYMLINK libspdk_bdev_passthru.so 00:04:05.402 SYMLINK libspdk_bdev_split.so 00:04:05.402 CC module/bdev/aio/bdev_aio_rpc.o 00:04:05.402 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:05.402 CC module/bdev/iscsi/bdev_iscsi.o 00:04:05.659 LIB libspdk_bdev_xnvme.a 00:04:05.659 LIB libspdk_bdev_zone_block.a 00:04:05.659 SO libspdk_bdev_xnvme.so.3.0 00:04:05.659 SO libspdk_bdev_zone_block.so.6.0 00:04:05.659 LIB libspdk_bdev_aio.a 00:04:05.659 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:05.659 SO libspdk_bdev_aio.so.6.0 00:04:05.659 SYMLINK libspdk_bdev_xnvme.so 00:04:05.659 SYMLINK libspdk_bdev_zone_block.so 00:04:05.659 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:05.659 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:05.659 LIB libspdk_bdev_lvol.a 00:04:05.659 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:05.659 SYMLINK libspdk_bdev_aio.so 00:04:05.659 CC module/bdev/raid/bdev_raid_rpc.o 00:04:05.659 SO libspdk_bdev_lvol.so.6.0 00:04:05.659 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:05.917 SYMLINK libspdk_bdev_lvol.so 00:04:05.917 CC module/bdev/raid/bdev_raid_sb.o 00:04:05.917 LIB libspdk_bdev_ftl.a 00:04:05.917 CC module/bdev/raid/raid0.o 00:04:05.917 CC module/bdev/raid/raid1.o 00:04:05.917 LIB libspdk_bdev_iscsi.a 00:04:05.917 SO libspdk_bdev_ftl.so.6.0 00:04:05.917 SO libspdk_bdev_iscsi.so.6.0 
00:04:05.917 SYMLINK libspdk_bdev_iscsi.so 00:04:05.917 CC module/bdev/raid/concat.o 00:04:05.917 CC module/bdev/nvme/nvme_rpc.o 00:04:05.917 SYMLINK libspdk_bdev_ftl.so 00:04:05.917 CC module/bdev/nvme/bdev_mdns_client.o 00:04:05.917 CC module/bdev/nvme/vbdev_opal.o 00:04:06.175 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:06.175 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:06.175 LIB libspdk_bdev_raid.a 00:04:06.175 LIB libspdk_bdev_virtio.a 00:04:06.175 SO libspdk_bdev_virtio.so.6.0 00:04:06.175 SO libspdk_bdev_raid.so.6.0 00:04:06.175 SYMLINK libspdk_bdev_virtio.so 00:04:06.175 SYMLINK libspdk_bdev_raid.so 00:04:07.113 LIB libspdk_bdev_nvme.a 00:04:07.113 SO libspdk_bdev_nvme.so.7.1 00:04:07.113 SYMLINK libspdk_bdev_nvme.so 00:04:07.678 CC module/event/subsystems/fsdev/fsdev.o 00:04:07.678 CC module/event/subsystems/keyring/keyring.o 00:04:07.678 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:07.678 CC module/event/subsystems/iobuf/iobuf.o 00:04:07.678 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:07.678 CC module/event/subsystems/sock/sock.o 00:04:07.678 CC module/event/subsystems/vmd/vmd.o 00:04:07.678 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:07.678 CC module/event/subsystems/scheduler/scheduler.o 00:04:07.678 LIB libspdk_event_keyring.a 00:04:07.678 LIB libspdk_event_vhost_blk.a 00:04:07.678 LIB libspdk_event_fsdev.a 00:04:07.678 SO libspdk_event_keyring.so.1.0 00:04:07.678 LIB libspdk_event_scheduler.a 00:04:07.678 SO libspdk_event_vhost_blk.so.3.0 00:04:07.678 SO libspdk_event_fsdev.so.1.0 00:04:07.678 LIB libspdk_event_vmd.a 00:04:07.678 LIB libspdk_event_sock.a 00:04:07.678 LIB libspdk_event_iobuf.a 00:04:07.678 SO libspdk_event_scheduler.so.4.0 00:04:07.678 SO libspdk_event_vmd.so.6.0 00:04:07.678 SYMLINK libspdk_event_keyring.so 00:04:07.678 SO libspdk_event_iobuf.so.3.0 00:04:07.678 SO libspdk_event_sock.so.5.0 00:04:07.678 SYMLINK libspdk_event_fsdev.so 00:04:07.678 SYMLINK libspdk_event_vhost_blk.so 00:04:07.678 SYMLINK libspdk_event_scheduler.so 00:04:07.678 SYMLINK libspdk_event_vmd.so 00:04:07.678 SYMLINK libspdk_event_sock.so 00:04:07.678 SYMLINK libspdk_event_iobuf.so 00:04:07.936 CC module/event/subsystems/accel/accel.o 00:04:08.194 LIB libspdk_event_accel.a 00:04:08.194 SO libspdk_event_accel.so.6.0 00:04:08.194 SYMLINK libspdk_event_accel.so 00:04:08.452 CC module/event/subsystems/bdev/bdev.o 00:04:08.452 LIB libspdk_event_bdev.a 00:04:08.452 SO libspdk_event_bdev.so.6.0 00:04:08.709 SYMLINK libspdk_event_bdev.so 00:04:08.709 CC module/event/subsystems/scsi/scsi.o 00:04:08.710 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:08.710 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:08.710 CC module/event/subsystems/nbd/nbd.o 00:04:08.710 CC module/event/subsystems/ublk/ublk.o 00:04:08.967 LIB libspdk_event_scsi.a 00:04:08.967 LIB libspdk_event_nbd.a 00:04:08.967 SO libspdk_event_scsi.so.6.0 00:04:08.967 LIB libspdk_event_ublk.a 00:04:08.967 SO libspdk_event_nbd.so.6.0 00:04:08.967 LIB libspdk_event_nvmf.a 00:04:08.967 SO libspdk_event_ublk.so.3.0 00:04:08.967 SYMLINK libspdk_event_scsi.so 00:04:08.967 SO libspdk_event_nvmf.so.6.0 00:04:08.967 SYMLINK libspdk_event_nbd.so 00:04:08.967 SYMLINK libspdk_event_ublk.so 00:04:08.967 SYMLINK libspdk_event_nvmf.so 00:04:09.225 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:09.225 CC module/event/subsystems/iscsi/iscsi.o 00:04:09.225 LIB libspdk_event_vhost_scsi.a 00:04:09.225 SO libspdk_event_vhost_scsi.so.3.0 00:04:09.225 LIB libspdk_event_iscsi.a 00:04:09.225 SO libspdk_event_iscsi.so.6.0 
00:04:09.225 SYMLINK libspdk_event_vhost_scsi.so 00:04:09.225 SYMLINK libspdk_event_iscsi.so 00:04:09.483 SO libspdk.so.6.0 00:04:09.483 SYMLINK libspdk.so 00:04:09.741 CXX app/trace/trace.o 00:04:09.741 TEST_HEADER include/spdk/accel.h 00:04:09.741 TEST_HEADER include/spdk/accel_module.h 00:04:09.741 CC app/trace_record/trace_record.o 00:04:09.741 TEST_HEADER include/spdk/assert.h 00:04:09.741 TEST_HEADER include/spdk/barrier.h 00:04:09.741 TEST_HEADER include/spdk/base64.h 00:04:09.741 TEST_HEADER include/spdk/bdev.h 00:04:09.741 TEST_HEADER include/spdk/bdev_module.h 00:04:09.741 TEST_HEADER include/spdk/bdev_zone.h 00:04:09.741 TEST_HEADER include/spdk/bit_array.h 00:04:09.741 CC test/rpc_client/rpc_client_test.o 00:04:09.741 TEST_HEADER include/spdk/bit_pool.h 00:04:09.741 TEST_HEADER include/spdk/blob_bdev.h 00:04:09.741 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:09.741 TEST_HEADER include/spdk/blobfs.h 00:04:09.741 TEST_HEADER include/spdk/blob.h 00:04:09.741 TEST_HEADER include/spdk/conf.h 00:04:09.741 TEST_HEADER include/spdk/config.h 00:04:09.741 TEST_HEADER include/spdk/cpuset.h 00:04:09.741 TEST_HEADER include/spdk/crc16.h 00:04:09.741 TEST_HEADER include/spdk/crc32.h 00:04:09.741 TEST_HEADER include/spdk/crc64.h 00:04:09.741 TEST_HEADER include/spdk/dif.h 00:04:09.741 TEST_HEADER include/spdk/dma.h 00:04:09.741 TEST_HEADER include/spdk/endian.h 00:04:09.741 TEST_HEADER include/spdk/env_dpdk.h 00:04:09.741 TEST_HEADER include/spdk/env.h 00:04:09.741 TEST_HEADER include/spdk/event.h 00:04:09.741 TEST_HEADER include/spdk/fd_group.h 00:04:09.741 TEST_HEADER include/spdk/fd.h 00:04:09.741 TEST_HEADER include/spdk/file.h 00:04:09.741 TEST_HEADER include/spdk/fsdev.h 00:04:09.741 TEST_HEADER include/spdk/fsdev_module.h 00:04:09.741 TEST_HEADER include/spdk/ftl.h 00:04:09.741 CC examples/util/zipf/zipf.o 00:04:09.741 CC examples/ioat/perf/perf.o 00:04:09.741 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:09.741 TEST_HEADER include/spdk/gpt_spec.h 00:04:09.741 TEST_HEADER include/spdk/hexlify.h 00:04:09.741 TEST_HEADER include/spdk/histogram_data.h 00:04:09.741 CC test/thread/poller_perf/poller_perf.o 00:04:09.741 TEST_HEADER include/spdk/idxd.h 00:04:09.741 TEST_HEADER include/spdk/idxd_spec.h 00:04:09.741 TEST_HEADER include/spdk/init.h 00:04:09.741 TEST_HEADER include/spdk/ioat.h 00:04:09.741 TEST_HEADER include/spdk/ioat_spec.h 00:04:09.741 TEST_HEADER include/spdk/iscsi_spec.h 00:04:09.741 CC test/dma/test_dma/test_dma.o 00:04:09.741 TEST_HEADER include/spdk/json.h 00:04:09.741 TEST_HEADER include/spdk/jsonrpc.h 00:04:09.741 TEST_HEADER include/spdk/keyring.h 00:04:09.741 TEST_HEADER include/spdk/keyring_module.h 00:04:09.741 CC test/app/bdev_svc/bdev_svc.o 00:04:09.741 TEST_HEADER include/spdk/likely.h 00:04:09.741 TEST_HEADER include/spdk/log.h 00:04:09.741 TEST_HEADER include/spdk/lvol.h 00:04:09.741 TEST_HEADER include/spdk/md5.h 00:04:09.741 TEST_HEADER include/spdk/memory.h 00:04:09.741 TEST_HEADER include/spdk/mmio.h 00:04:09.741 TEST_HEADER include/spdk/nbd.h 00:04:09.741 TEST_HEADER include/spdk/net.h 00:04:09.741 TEST_HEADER include/spdk/notify.h 00:04:09.741 TEST_HEADER include/spdk/nvme.h 00:04:09.741 TEST_HEADER include/spdk/nvme_intel.h 00:04:09.741 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:09.741 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:09.741 TEST_HEADER include/spdk/nvme_spec.h 00:04:09.741 TEST_HEADER include/spdk/nvme_zns.h 00:04:09.741 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:09.741 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:09.741 TEST_HEADER 
include/spdk/nvmf.h 00:04:09.741 TEST_HEADER include/spdk/nvmf_spec.h 00:04:09.741 TEST_HEADER include/spdk/nvmf_transport.h 00:04:09.741 CC test/env/mem_callbacks/mem_callbacks.o 00:04:09.741 TEST_HEADER include/spdk/opal.h 00:04:09.741 TEST_HEADER include/spdk/opal_spec.h 00:04:09.741 TEST_HEADER include/spdk/pci_ids.h 00:04:09.741 TEST_HEADER include/spdk/pipe.h 00:04:09.741 TEST_HEADER include/spdk/queue.h 00:04:09.741 TEST_HEADER include/spdk/reduce.h 00:04:09.741 TEST_HEADER include/spdk/rpc.h 00:04:09.741 TEST_HEADER include/spdk/scheduler.h 00:04:09.741 TEST_HEADER include/spdk/scsi.h 00:04:09.741 LINK rpc_client_test 00:04:09.741 TEST_HEADER include/spdk/scsi_spec.h 00:04:09.741 TEST_HEADER include/spdk/sock.h 00:04:09.741 TEST_HEADER include/spdk/stdinc.h 00:04:09.741 TEST_HEADER include/spdk/string.h 00:04:09.741 TEST_HEADER include/spdk/thread.h 00:04:09.741 TEST_HEADER include/spdk/trace.h 00:04:09.741 TEST_HEADER include/spdk/trace_parser.h 00:04:09.741 TEST_HEADER include/spdk/tree.h 00:04:09.741 TEST_HEADER include/spdk/ublk.h 00:04:09.741 TEST_HEADER include/spdk/util.h 00:04:09.741 TEST_HEADER include/spdk/uuid.h 00:04:09.741 LINK poller_perf 00:04:09.741 TEST_HEADER include/spdk/version.h 00:04:09.741 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:09.741 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:09.741 TEST_HEADER include/spdk/vhost.h 00:04:09.741 TEST_HEADER include/spdk/vmd.h 00:04:09.741 TEST_HEADER include/spdk/xor.h 00:04:09.741 TEST_HEADER include/spdk/zipf.h 00:04:09.741 CXX test/cpp_headers/accel.o 00:04:09.741 LINK spdk_trace_record 00:04:09.741 LINK zipf 00:04:09.741 LINK bdev_svc 00:04:09.999 LINK ioat_perf 00:04:09.999 CXX test/cpp_headers/accel_module.o 00:04:09.999 CXX test/cpp_headers/assert.o 00:04:09.999 LINK spdk_trace 00:04:09.999 CC examples/ioat/verify/verify.o 00:04:09.999 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:09.999 CXX test/cpp_headers/barrier.o 00:04:09.999 CXX test/cpp_headers/base64.o 00:04:09.999 LINK test_dma 00:04:09.999 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:09.999 CC test/app/histogram_perf/histogram_perf.o 00:04:10.270 CC app/nvmf_tgt/nvmf_main.o 00:04:10.270 CC test/event/event_perf/event_perf.o 00:04:10.270 LINK verify 00:04:10.270 LINK interrupt_tgt 00:04:10.270 CXX test/cpp_headers/bdev.o 00:04:10.270 LINK mem_callbacks 00:04:10.270 CXX test/cpp_headers/bdev_module.o 00:04:10.270 LINK histogram_perf 00:04:10.270 CXX test/cpp_headers/bdev_zone.o 00:04:10.270 LINK nvmf_tgt 00:04:10.270 LINK event_perf 00:04:10.270 CC app/iscsi_tgt/iscsi_tgt.o 00:04:10.539 CC test/env/vtophys/vtophys.o 00:04:10.539 CC test/event/reactor/reactor.o 00:04:10.539 LINK iscsi_tgt 00:04:10.539 LINK nvme_fuzz 00:04:10.539 CXX test/cpp_headers/bit_array.o 00:04:10.539 CC examples/sock/hello_world/hello_sock.o 00:04:10.539 CC examples/vmd/lsvmd/lsvmd.o 00:04:10.539 CC test/app/jsoncat/jsoncat.o 00:04:10.539 CC examples/thread/thread/thread_ex.o 00:04:10.539 LINK reactor 00:04:10.539 CC examples/idxd/perf/perf.o 00:04:10.539 LINK vtophys 00:04:10.539 CXX test/cpp_headers/bit_pool.o 00:04:10.539 LINK jsoncat 00:04:10.539 LINK lsvmd 00:04:10.539 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:10.539 LINK hello_sock 00:04:10.796 CC test/event/reactor_perf/reactor_perf.o 00:04:10.796 LINK thread 00:04:10.796 CC app/spdk_tgt/spdk_tgt.o 00:04:10.796 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:10.796 CXX test/cpp_headers/blob_bdev.o 00:04:10.796 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:10.796 LINK reactor_perf 00:04:10.796 CC 
examples/vmd/led/led.o 00:04:10.796 LINK env_dpdk_post_init 00:04:10.796 LINK spdk_tgt 00:04:10.796 LINK idxd_perf 00:04:10.796 CXX test/cpp_headers/blobfs_bdev.o 00:04:10.796 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:10.796 CC test/accel/dif/dif.o 00:04:11.053 LINK led 00:04:11.053 CXX test/cpp_headers/blobfs.o 00:04:11.053 CC test/event/app_repeat/app_repeat.o 00:04:11.053 CC test/blobfs/mkfs/mkfs.o 00:04:11.053 CC test/env/memory/memory_ut.o 00:04:11.053 CC test/env/pci/pci_ut.o 00:04:11.053 CC app/spdk_lspci/spdk_lspci.o 00:04:11.053 CXX test/cpp_headers/blob.o 00:04:11.053 LINK app_repeat 00:04:11.053 CC examples/nvme/hello_world/hello_world.o 00:04:11.309 LINK spdk_lspci 00:04:11.309 LINK mkfs 00:04:11.309 CXX test/cpp_headers/conf.o 00:04:11.309 CC test/event/scheduler/scheduler.o 00:04:11.309 LINK vhost_fuzz 00:04:11.309 CXX test/cpp_headers/config.o 00:04:11.309 CC app/spdk_nvme_perf/perf.o 00:04:11.309 CXX test/cpp_headers/cpuset.o 00:04:11.309 LINK hello_world 00:04:11.309 CC examples/nvme/reconnect/reconnect.o 00:04:11.566 LINK pci_ut 00:04:11.566 LINK dif 00:04:11.566 CXX test/cpp_headers/crc16.o 00:04:11.566 LINK scheduler 00:04:11.566 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:11.566 CC test/lvol/esnap/esnap.o 00:04:11.566 CXX test/cpp_headers/crc32.o 00:04:11.822 CC examples/nvme/arbitration/arbitration.o 00:04:11.822 CC test/nvme/aer/aer.o 00:04:11.822 CC examples/nvme/hotplug/hotplug.o 00:04:11.822 CXX test/cpp_headers/crc64.o 00:04:11.822 LINK reconnect 00:04:11.822 CXX test/cpp_headers/dif.o 00:04:11.822 LINK memory_ut 00:04:11.822 LINK hotplug 00:04:11.822 CC test/nvme/reset/reset.o 00:04:11.822 LINK arbitration 00:04:12.079 LINK aer 00:04:12.079 LINK iscsi_fuzz 00:04:12.079 CXX test/cpp_headers/dma.o 00:04:12.079 CC test/nvme/sgl/sgl.o 00:04:12.079 CC test/nvme/e2edp/nvme_dp.o 00:04:12.079 CXX test/cpp_headers/endian.o 00:04:12.079 LINK nvme_manage 00:04:12.079 LINK spdk_nvme_perf 00:04:12.079 CC test/nvme/overhead/overhead.o 00:04:12.079 LINK reset 00:04:12.079 CXX test/cpp_headers/env_dpdk.o 00:04:12.336 CC test/app/stub/stub.o 00:04:12.336 CC test/nvme/err_injection/err_injection.o 00:04:12.336 CXX test/cpp_headers/env.o 00:04:12.336 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:12.336 LINK nvme_dp 00:04:12.336 CC app/spdk_nvme_identify/identify.o 00:04:12.336 LINK sgl 00:04:12.336 LINK overhead 00:04:12.336 LINK err_injection 00:04:12.336 LINK stub 00:04:12.336 CXX test/cpp_headers/event.o 00:04:12.336 LINK cmb_copy 00:04:12.336 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:12.592 CC test/nvme/startup/startup.o 00:04:12.592 CC examples/nvme/abort/abort.o 00:04:12.592 CXX test/cpp_headers/fd_group.o 00:04:12.592 CXX test/cpp_headers/fd.o 00:04:12.592 CC app/spdk_nvme_discover/discovery_aer.o 00:04:12.592 CC app/spdk_top/spdk_top.o 00:04:12.592 CC test/bdev/bdevio/bdevio.o 00:04:12.592 LINK startup 00:04:12.592 CXX test/cpp_headers/file.o 00:04:12.863 LINK hello_fsdev 00:04:12.863 LINK spdk_nvme_discover 00:04:12.863 CC app/vhost/vhost.o 00:04:12.863 LINK abort 00:04:12.863 CXX test/cpp_headers/fsdev.o 00:04:12.863 CC test/nvme/reserve/reserve.o 00:04:12.863 CXX test/cpp_headers/fsdev_module.o 00:04:12.863 LINK spdk_nvme_identify 00:04:12.863 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:12.863 CXX test/cpp_headers/ftl.o 00:04:12.863 LINK vhost 00:04:13.167 CC test/nvme/simple_copy/simple_copy.o 00:04:13.167 LINK bdevio 00:04:13.167 CXX test/cpp_headers/fuse_dispatcher.o 00:04:13.167 LINK reserve 00:04:13.167 CXX test/cpp_headers/gpt_spec.o 
00:04:13.167 CXX test/cpp_headers/hexlify.o 00:04:13.167 LINK pmr_persistence 00:04:13.167 CXX test/cpp_headers/histogram_data.o 00:04:13.167 CXX test/cpp_headers/idxd.o 00:04:13.167 CXX test/cpp_headers/idxd_spec.o 00:04:13.167 CXX test/cpp_headers/init.o 00:04:13.167 LINK simple_copy 00:04:13.167 CXX test/cpp_headers/ioat.o 00:04:13.167 CXX test/cpp_headers/ioat_spec.o 00:04:13.167 CC examples/accel/perf/accel_perf.o 00:04:13.435 CXX test/cpp_headers/iscsi_spec.o 00:04:13.436 CXX test/cpp_headers/json.o 00:04:13.436 CXX test/cpp_headers/jsonrpc.o 00:04:13.436 CC test/nvme/connect_stress/connect_stress.o 00:04:13.436 CC app/spdk_dd/spdk_dd.o 00:04:13.436 CXX test/cpp_headers/keyring.o 00:04:13.436 CC examples/blob/hello_world/hello_blob.o 00:04:13.436 CXX test/cpp_headers/keyring_module.o 00:04:13.436 CXX test/cpp_headers/likely.o 00:04:13.436 LINK spdk_top 00:04:13.436 CXX test/cpp_headers/log.o 00:04:13.436 LINK connect_stress 00:04:13.697 CXX test/cpp_headers/lvol.o 00:04:13.697 CC app/fio/nvme/fio_plugin.o 00:04:13.697 CXX test/cpp_headers/md5.o 00:04:13.697 LINK hello_blob 00:04:13.697 LINK accel_perf 00:04:13.697 CC examples/blob/cli/blobcli.o 00:04:13.697 CC test/nvme/boot_partition/boot_partition.o 00:04:13.697 CC test/nvme/compliance/nvme_compliance.o 00:04:13.697 LINK spdk_dd 00:04:13.697 CC test/nvme/fused_ordering/fused_ordering.o 00:04:13.697 CXX test/cpp_headers/memory.o 00:04:13.698 LINK boot_partition 00:04:13.698 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:13.955 CC test/nvme/fdp/fdp.o 00:04:13.955 CXX test/cpp_headers/mmio.o 00:04:13.955 CXX test/cpp_headers/nbd.o 00:04:13.955 CC test/nvme/cuse/cuse.o 00:04:13.955 LINK fused_ordering 00:04:13.955 LINK doorbell_aers 00:04:13.955 CXX test/cpp_headers/net.o 00:04:13.955 LINK blobcli 00:04:13.955 LINK nvme_compliance 00:04:13.955 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.213 CXX test/cpp_headers/notify.o 00:04:14.213 CXX test/cpp_headers/nvme.o 00:04:14.213 LINK spdk_nvme 00:04:14.213 LINK fdp 00:04:14.213 CXX test/cpp_headers/nvme_intel.o 00:04:14.213 CXX test/cpp_headers/nvme_ocssd.o 00:04:14.213 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:14.213 CXX test/cpp_headers/nvme_spec.o 00:04:14.213 CXX test/cpp_headers/nvme_zns.o 00:04:14.213 CXX test/cpp_headers/nvmf_cmd.o 00:04:14.213 LINK hello_bdev 00:04:14.213 CC app/fio/bdev/fio_plugin.o 00:04:14.213 CC examples/bdev/bdevperf/bdevperf.o 00:04:14.472 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:14.472 CXX test/cpp_headers/nvmf.o 00:04:14.472 CXX test/cpp_headers/nvmf_spec.o 00:04:14.472 CXX test/cpp_headers/nvmf_transport.o 00:04:14.472 CXX test/cpp_headers/opal.o 00:04:14.472 CXX test/cpp_headers/opal_spec.o 00:04:14.472 CXX test/cpp_headers/pci_ids.o 00:04:14.472 CXX test/cpp_headers/pipe.o 00:04:14.472 CXX test/cpp_headers/queue.o 00:04:14.472 CXX test/cpp_headers/reduce.o 00:04:14.472 CXX test/cpp_headers/rpc.o 00:04:14.472 CXX test/cpp_headers/scheduler.o 00:04:14.472 CXX test/cpp_headers/scsi.o 00:04:14.730 CXX test/cpp_headers/scsi_spec.o 00:04:14.730 LINK spdk_bdev 00:04:14.730 CXX test/cpp_headers/sock.o 00:04:14.730 CXX test/cpp_headers/stdinc.o 00:04:14.730 CXX test/cpp_headers/string.o 00:04:14.730 CXX test/cpp_headers/thread.o 00:04:14.730 CXX test/cpp_headers/trace.o 00:04:14.730 CXX test/cpp_headers/trace_parser.o 00:04:14.730 CXX test/cpp_headers/tree.o 00:04:14.730 CXX test/cpp_headers/ublk.o 00:04:14.730 CXX test/cpp_headers/util.o 00:04:14.730 CXX test/cpp_headers/uuid.o 00:04:14.730 CXX test/cpp_headers/version.o 00:04:14.730 CXX 
test/cpp_headers/vfio_user_pci.o 00:04:14.730 CXX test/cpp_headers/vfio_user_spec.o 00:04:14.730 CXX test/cpp_headers/vhost.o 00:04:14.730 CXX test/cpp_headers/vmd.o 00:04:14.988 CXX test/cpp_headers/xor.o 00:04:14.988 CXX test/cpp_headers/zipf.o 00:04:14.988 LINK cuse 00:04:14.988 LINK bdevperf 00:04:15.554 CC examples/nvmf/nvmf/nvmf.o 00:04:15.554 LINK nvmf 00:04:16.488 LINK esnap 00:04:16.488 00:04:16.488 real 1m1.584s 00:04:16.488 user 5m6.417s 00:04:16.488 sys 0m50.197s 00:04:16.488 23:17:02 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:16.488 23:17:02 make -- common/autotest_common.sh@10 -- $ set +x 00:04:16.488 ************************************ 00:04:16.488 END TEST make 00:04:16.488 ************************************ 00:04:16.488 23:17:02 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:16.488 23:17:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:16.488 23:17:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:16.488 23:17:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.488 23:17:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:16.488 23:17:02 -- pm/common@44 -- $ pid=5805 00:04:16.488 23:17:02 -- pm/common@50 -- $ kill -TERM 5805 00:04:16.488 23:17:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.488 23:17:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:16.488 23:17:02 -- pm/common@44 -- $ pid=5806 00:04:16.488 23:17:02 -- pm/common@50 -- $ kill -TERM 5806 00:04:16.488 23:17:02 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:16.488 23:17:02 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:16.488 23:17:02 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:16.488 23:17:02 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.488 23:17:02 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:16.747 23:17:02 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.747 23:17:02 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.747 23:17:02 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.747 23:17:02 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.747 23:17:02 -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.747 23:17:02 -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.747 23:17:02 -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.747 23:17:02 -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.747 23:17:02 -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.747 23:17:02 -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.747 23:17:02 -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.747 23:17:02 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.747 23:17:02 -- scripts/common.sh@344 -- # case "$op" in 00:04:16.747 23:17:02 -- scripts/common.sh@345 -- # : 1 00:04:16.747 23:17:02 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.747 23:17:02 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.747 23:17:02 -- scripts/common.sh@365 -- # decimal 1 00:04:16.747 23:17:02 -- scripts/common.sh@353 -- # local d=1 00:04:16.747 23:17:02 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.747 23:17:02 -- scripts/common.sh@355 -- # echo 1 00:04:16.747 23:17:02 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.747 23:17:02 -- scripts/common.sh@366 -- # decimal 2 00:04:16.747 23:17:02 -- scripts/common.sh@353 -- # local d=2 00:04:16.747 23:17:02 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.747 23:17:02 -- scripts/common.sh@355 -- # echo 2 00:04:16.747 23:17:02 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.747 23:17:02 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.747 23:17:02 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.747 23:17:02 -- scripts/common.sh@368 -- # return 0 00:04:16.747 23:17:02 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.747 23:17:02 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.747 --rc genhtml_branch_coverage=1 00:04:16.747 --rc genhtml_function_coverage=1 00:04:16.747 --rc genhtml_legend=1 00:04:16.747 --rc geninfo_all_blocks=1 00:04:16.747 --rc geninfo_unexecuted_blocks=1 00:04:16.747 00:04:16.747 ' 00:04:16.747 23:17:02 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.747 --rc genhtml_branch_coverage=1 00:04:16.747 --rc genhtml_function_coverage=1 00:04:16.747 --rc genhtml_legend=1 00:04:16.747 --rc geninfo_all_blocks=1 00:04:16.747 --rc geninfo_unexecuted_blocks=1 00:04:16.747 00:04:16.747 ' 00:04:16.747 23:17:02 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.747 --rc genhtml_branch_coverage=1 00:04:16.747 --rc genhtml_function_coverage=1 00:04:16.747 --rc genhtml_legend=1 00:04:16.747 --rc geninfo_all_blocks=1 00:04:16.747 --rc geninfo_unexecuted_blocks=1 00:04:16.747 00:04:16.747 ' 00:04:16.747 23:17:02 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.747 --rc genhtml_branch_coverage=1 00:04:16.747 --rc genhtml_function_coverage=1 00:04:16.747 --rc genhtml_legend=1 00:04:16.747 --rc geninfo_all_blocks=1 00:04:16.747 --rc geninfo_unexecuted_blocks=1 00:04:16.747 00:04:16.747 ' 00:04:16.747 23:17:02 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:16.747 23:17:02 -- nvmf/common.sh@7 -- # uname -s 00:04:16.747 23:17:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.747 23:17:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.747 23:17:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.747 23:17:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.748 23:17:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.748 23:17:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.748 23:17:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:16.748 23:17:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.748 23:17:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.748 23:17:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.748 23:17:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b93f5f7c-f437-4b52-82ca-ba312a95313a 00:04:16.748 
23:17:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=b93f5f7c-f437-4b52-82ca-ba312a95313a 00:04:16.748 23:17:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.748 23:17:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.748 23:17:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:16.748 23:17:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.748 23:17:02 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:16.748 23:17:02 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.748 23:17:02 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.748 23:17:02 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.748 23:17:02 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.748 23:17:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.748 23:17:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.748 23:17:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.748 23:17:02 -- paths/export.sh@5 -- # export PATH 00:04:16.748 23:17:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.748 23:17:02 -- nvmf/common.sh@51 -- # : 0 00:04:16.748 23:17:02 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.748 23:17:02 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:16.748 23:17:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.748 23:17:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.748 23:17:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.748 23:17:02 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.748 23:17:02 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.748 23:17:02 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.748 23:17:02 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.748 23:17:02 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:16.748 23:17:02 -- spdk/autotest.sh@32 -- # uname -s 00:04:16.748 23:17:02 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:16.748 23:17:02 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:16.748 23:17:02 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:16.748 23:17:02 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:16.748 23:17:02 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:16.748 23:17:02 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:16.748 23:17:02 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:16.748 23:17:02 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:16.748 23:17:02 -- spdk/autotest.sh@48 -- # udevadm_pid=66555 00:04:16.748 23:17:02 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:16.748 23:17:02 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:16.748 23:17:02 -- pm/common@17 -- # local monitor 00:04:16.748 23:17:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.748 23:17:02 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:16.748 23:17:02 -- pm/common@21 -- # date +%s 00:04:16.748 23:17:02 -- pm/common@25 -- # sleep 1 00:04:16.748 23:17:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732058222 00:04:16.748 23:17:02 -- pm/common@21 -- # date +%s 00:04:16.748 23:17:02 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732058222 00:04:16.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732058222_collect-cpu-load.pm.log 00:04:16.748 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732058222_collect-vmstat.pm.log 00:04:17.682 23:17:03 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:17.682 23:17:03 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:17.682 23:17:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.682 23:17:03 -- common/autotest_common.sh@10 -- # set +x 00:04:17.682 23:17:03 -- spdk/autotest.sh@59 -- # create_test_list 00:04:17.682 23:17:03 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:17.682 23:17:03 -- common/autotest_common.sh@10 -- # set +x 00:04:17.682 23:17:03 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:17.682 23:17:03 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:17.682 23:17:03 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:17.682 23:17:03 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:17.682 23:17:03 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:17.682 23:17:03 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:17.682 23:17:03 -- common/autotest_common.sh@1457 -- # uname 00:04:17.682 23:17:03 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:17.682 23:17:03 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:17.682 23:17:03 -- common/autotest_common.sh@1477 -- # uname 00:04:17.682 23:17:03 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:17.682 23:17:03 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:17.682 23:17:03 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:17.941 lcov: LCOV version 1.15 00:04:17.941 23:17:03 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:32.813 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:32.813 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:47.692 23:17:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:47.692 23:17:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.692 23:17:31 -- common/autotest_common.sh@10 -- # set +x 00:04:47.692 23:17:31 -- spdk/autotest.sh@78 -- # rm -f 00:04:47.692 23:17:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:47.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.692 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:47.692 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:47.692 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:47.692 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:47.692 23:17:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:47.692 23:17:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:47.692 23:17:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:47.692 23:17:32 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:47.692 23:17:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:47.692 23:17:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:47.692 23:17:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:47.692 23:17:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:47.692 23:17:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:47.692 23:17:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:47.692 23:17:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:47.692 23:17:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:47.692 23:17:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:47.692 23:17:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:47.692 23:17:32 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:47.692 23:17:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:47.692 23:17:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:47.692 23:17:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:47.692 23:17:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:47.692 23:17:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:47.692 23:17:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.692 23:17:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.692 23:17:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:47.692 23:17:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:47.692 23:17:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:47.692 No valid GPT data, bailing 00:04:47.692 23:17:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:47.692 23:17:32 -- scripts/common.sh@394 -- # pt= 00:04:47.692 23:17:32 -- scripts/common.sh@395 -- # return 1 00:04:47.692 23:17:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:47.692 1+0 records in 00:04:47.692 1+0 records out 00:04:47.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00971018 s, 108 MB/s 00:04:47.692 23:17:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.692 23:17:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.692 23:17:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:47.692 23:17:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:47.693 23:17:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:47.693 No valid GPT data, bailing 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # pt= 00:04:47.693 23:17:32 -- scripts/common.sh@395 -- # return 1 00:04:47.693 23:17:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:47.693 1+0 records in 00:04:47.693 1+0 records out 00:04:47.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00287458 s, 365 MB/s 00:04:47.693 23:17:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.693 23:17:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.693 23:17:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:47.693 23:17:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:47.693 23:17:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:47.693 No valid GPT data, bailing 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # pt= 00:04:47.693 23:17:32 -- scripts/common.sh@395 -- # return 1 00:04:47.693 23:17:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:47.693 1+0 
records in 00:04:47.693 1+0 records out 00:04:47.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043394 s, 242 MB/s 00:04:47.693 23:17:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.693 23:17:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.693 23:17:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:47.693 23:17:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:47.693 23:17:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:47.693 No valid GPT data, bailing 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # pt= 00:04:47.693 23:17:32 -- scripts/common.sh@395 -- # return 1 00:04:47.693 23:17:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:47.693 1+0 records in 00:04:47.693 1+0 records out 00:04:47.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453139 s, 231 MB/s 00:04:47.693 23:17:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.693 23:17:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.693 23:17:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:47.693 23:17:32 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:47.693 23:17:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:47.693 No valid GPT data, bailing 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # pt= 00:04:47.693 23:17:32 -- scripts/common.sh@395 -- # return 1 00:04:47.693 23:17:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:47.693 1+0 records in 00:04:47.693 1+0 records out 00:04:47.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394532 s, 266 MB/s 00:04:47.693 23:17:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:47.693 23:17:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:47.693 23:17:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:47.693 23:17:32 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:47.693 23:17:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:47.693 No valid GPT data, bailing 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:47.693 23:17:32 -- scripts/common.sh@394 -- # pt= 00:04:47.693 23:17:32 -- scripts/common.sh@395 -- # return 1 00:04:47.693 23:17:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:47.693 1+0 records in 00:04:47.693 1+0 records out 00:04:47.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0057651 s, 182 MB/s 00:04:47.693 23:17:32 -- spdk/autotest.sh@105 -- # sync 00:04:47.693 23:17:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:47.693 23:17:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:47.693 23:17:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:48.693 23:17:34 -- spdk/autotest.sh@111 -- # uname -s 00:04:48.693 23:17:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:48.693 23:17:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:48.693 23:17:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:48.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.236 
Hugepages 00:04:49.236 node hugesize free / total 00:04:49.496 node0 1048576kB 0 / 0 00:04:49.496 node0 2048kB 0 / 0 00:04:49.496 00:04:49.496 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.496 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:49.496 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:49.496 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:49.496 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:49.757 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:49.757 23:17:35 -- spdk/autotest.sh@117 -- # uname -s 00:04:49.757 23:17:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:49.757 23:17:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:49.757 23:17:35 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:50.018 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.589 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.589 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.589 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.589 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:50.850 23:17:36 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:51.791 23:17:37 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:51.791 23:17:37 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:51.791 23:17:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:51.791 23:17:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:51.791 23:17:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:51.791 23:17:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:51.791 23:17:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:51.791 23:17:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:51.791 23:17:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:51.791 23:17:37 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:51.791 23:17:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:51.791 23:17:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.312 Waiting for block devices as requested 00:04:52.312 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.312 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.572 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:52.572 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:57.865 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:57.865 23:17:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.865 23:17:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:57.865 23:17:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.865 23:17:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:57.865 23:17:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.865 23:17:43 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:57.865 23:17:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:57.865 23:17:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:57.865 23:17:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:57.865 23:17:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:57.865 23:17:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.865 23:17:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:57.865 23:17:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.865 23:17:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.865 23:17:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.865 23:17:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.865 23:17:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.865 23:17:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:57.865 23:17:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.865 23:17:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.865 23:17:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:57.865 23:17:43 -- common/autotest_common.sh@1543 -- # continue 00:04:57.865 23:17:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.865 23:17:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:57.865 23:17:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.865 23:17:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:57.865 23:17:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.866 23:17:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:57.866 23:17:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:57.866 23:17:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:57.866 23:17:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:57.866 23:17:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.866 23:17:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.866 23:17:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.866 23:17:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.866 23:17:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:57.866 23:17:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.866 23:17:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:57.866 23:17:43 -- common/autotest_common.sh@1543 -- # continue 00:04:57.866 23:17:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.866 23:17:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:57.866 23:17:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.866 23:17:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:57.866 23:17:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:57.866 23:17:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.866 23:17:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.866 23:17:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.866 23:17:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.867 23:17:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:57.867 23:17:43 -- common/autotest_common.sh@1543 -- # continue 00:04:57.867 23:17:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:57.867 23:17:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:57.867 23:17:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:57.867 23:17:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:57.867 23:17:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:57.867 23:17:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:57.867 23:17:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:57.867 23:17:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:57.867 23:17:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:57.867 23:17:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:57.867 23:17:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:57.867 23:17:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:57.867 23:17:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:57.867 23:17:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:57.867 23:17:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:57.867 23:17:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:57.867 23:17:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:57.867 23:17:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:57.867 23:17:43 -- common/autotest_common.sh@1543 -- # continue 00:04:57.867 23:17:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:57.867 23:17:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.867 23:17:43 -- common/autotest_common.sh@10 -- # set +x 00:04:57.867 23:17:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:57.867 23:17:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.867 23:17:43 -- common/autotest_common.sh@10 -- # set +x 00:04:57.867 23:17:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.700 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.700 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.700 23:17:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:58.700 23:17:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.700 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:04:58.700 23:17:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:58.700 23:17:44 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:58.700 23:17:44 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.700 23:17:44 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:58.700 23:17:44 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:58.700 23:17:44 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:58.700 23:17:44 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:58.700 23:17:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:58.700 23:17:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:58.700 23:17:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:58.700 23:17:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.700 23:17:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.700 23:17:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.700 23:17:44 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:58.700 23:17:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:58.700 23:17:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.700 23:17:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.700 23:17:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.700 23:17:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.700 23:17:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.700 23:17:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:58.700 23:17:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:58.700 23:17:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.700 23:17:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.700 23:17:44 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:58.700 23:17:44 -- common/autotest_common.sh@1572 -- # return 0 00:04:58.700 23:17:44 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:58.700 23:17:44 -- common/autotest_common.sh@1580 -- # return 0 00:04:58.700 23:17:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:58.700 23:17:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:58.700 23:17:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.700 23:17:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.700 23:17:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:58.700 23:17:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.700 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:04:58.962 23:17:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:58.962 23:17:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.962 23:17:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.962 23:17:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.962 23:17:44 -- common/autotest_common.sh@10 -- # set +x 00:04:58.962 ************************************ 00:04:58.962 START TEST env 00:04:58.962 ************************************ 00:04:58.962 23:17:44 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.962 * Looking for test storage... 00:04:58.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:58.962 23:17:44 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.962 23:17:44 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.962 23:17:44 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.962 23:17:45 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.962 23:17:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.962 23:17:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.962 23:17:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.962 23:17:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.962 23:17:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.962 23:17:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.962 23:17:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.962 23:17:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.962 23:17:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.962 23:17:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.962 23:17:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.962 23:17:45 env -- scripts/common.sh@344 -- # case "$op" in 00:04:58.962 23:17:45 env -- scripts/common.sh@345 -- # : 1 00:04:58.962 23:17:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.962 23:17:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.963 23:17:45 env -- scripts/common.sh@365 -- # decimal 1 00:04:58.963 23:17:45 env -- scripts/common.sh@353 -- # local d=1 00:04:58.963 23:17:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.963 23:17:45 env -- scripts/common.sh@355 -- # echo 1 00:04:58.963 23:17:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.963 23:17:45 env -- scripts/common.sh@366 -- # decimal 2 00:04:58.963 23:17:45 env -- scripts/common.sh@353 -- # local d=2 00:04:58.963 23:17:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.963 23:17:45 env -- scripts/common.sh@355 -- # echo 2 00:04:58.963 23:17:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.963 23:17:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.963 23:17:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.963 23:17:45 env -- scripts/common.sh@368 -- # return 0 00:04:58.963 23:17:45 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.963 23:17:45 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.963 --rc genhtml_branch_coverage=1 00:04:58.963 --rc genhtml_function_coverage=1 00:04:58.963 --rc genhtml_legend=1 00:04:58.963 --rc geninfo_all_blocks=1 00:04:58.963 --rc geninfo_unexecuted_blocks=1 00:04:58.963 00:04:58.963 ' 00:04:58.963 23:17:45 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.963 --rc genhtml_branch_coverage=1 00:04:58.963 --rc genhtml_function_coverage=1 00:04:58.963 --rc genhtml_legend=1 00:04:58.963 --rc geninfo_all_blocks=1 00:04:58.963 --rc geninfo_unexecuted_blocks=1 00:04:58.963 00:04:58.963 ' 00:04:58.963 23:17:45 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.963 --rc genhtml_branch_coverage=1 00:04:58.963 --rc genhtml_function_coverage=1 00:04:58.963 --rc genhtml_legend=1 00:04:58.963 --rc geninfo_all_blocks=1 00:04:58.963 --rc geninfo_unexecuted_blocks=1 00:04:58.963 00:04:58.963 ' 00:04:58.963 23:17:45 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.963 --rc genhtml_branch_coverage=1 00:04:58.963 --rc genhtml_function_coverage=1 00:04:58.963 --rc genhtml_legend=1 00:04:58.963 --rc geninfo_all_blocks=1 00:04:58.963 --rc geninfo_unexecuted_blocks=1 00:04:58.963 00:04:58.963 ' 00:04:58.963 23:17:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.963 23:17:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.963 23:17:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.963 23:17:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.963 ************************************ 00:04:58.963 START TEST env_memory 00:04:58.963 ************************************ 00:04:58.963 23:17:45 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.963 00:04:58.963 00:04:58.963 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.963 http://cunit.sourceforge.net/ 00:04:58.963 00:04:58.963 00:04:58.963 Suite: memory 00:04:58.963 Test: alloc and free memory map ...[2024-11-19 23:17:45.120412] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.963 passed 00:04:59.224 Test: mem map translation ...[2024-11-19 23:17:45.159220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:59.224 [2024-11-19 23:17:45.159261] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:59.224 [2024-11-19 23:17:45.159320] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:59.224 [2024-11-19 23:17:45.159336] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.224 passed 00:04:59.224 Test: mem map registration ...[2024-11-19 23:17:45.227405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:59.224 [2024-11-19 23:17:45.227457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:59.224 passed 00:04:59.224 Test: mem map adjacent registrations ...passed 00:04:59.224 00:04:59.224 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.224 suites 1 1 n/a 0 0 00:04:59.224 tests 4 4 4 0 0 00:04:59.224 asserts 152 152 152 0 n/a 00:04:59.224 00:04:59.224 Elapsed time = 0.233 seconds 00:04:59.224 00:04:59.224 real 0m0.262s 00:04:59.224 user 0m0.238s 00:04:59.224 sys 0m0.017s 00:04:59.224 23:17:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.224 ************************************ 00:04:59.224 END TEST env_memory 00:04:59.224 ************************************ 00:04:59.224 23:17:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.224 23:17:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.224 23:17:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.224 23:17:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.224 23:17:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.224 ************************************ 00:04:59.224 START TEST env_vtophys 00:04:59.224 ************************************ 00:04:59.224 23:17:45 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.224 EAL: lib.eal log level changed from notice to debug 00:04:59.224 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 1 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 2 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 3 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 4 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 5 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 6 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 7 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 8 as core 0 on socket 0 00:04:59.224 EAL: Detected lcore 9 as core 0 on socket 0 00:04:59.485 EAL: Maximum logical cores by configuration: 128 00:04:59.485 EAL: Detected CPU lcores: 10 00:04:59.485 EAL: Detected NUMA nodes: 1 00:04:59.485 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:59.485 EAL: Detected shared linkage of DPDK 00:04:59.485 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:59.485 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:59.485 EAL: Registered [vdev] bus. 00:04:59.485 EAL: bus.vdev log level changed from disabled to notice 00:04:59.485 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:59.485 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:59.485 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:59.485 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:59.485 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:59.485 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:59.485 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:59.485 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:59.485 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.485 EAL: No shared files mode enabled, IPC is disabled 00:04:59.485 EAL: Selected IOVA mode 'PA' 00:04:59.485 EAL: Probing VFIO support... 00:04:59.485 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.485 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:59.485 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.485 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.485 EAL: Setting up physically contiguous memory... 00:04:59.485 EAL: Setting maximum number of open files to 524288 00:04:59.485 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.485 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.485 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.485 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.485 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.485 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.485 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.485 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.486 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.486 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.486 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.486 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.486 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.486 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.486 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.486 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.486 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.486 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.486 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.486 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.486 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.486 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.486 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.486 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.486 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:04:59.486 EAL: Hugepages will be freed exactly as allocated. 00:04:59.486 EAL: No shared files mode enabled, IPC is disabled 00:04:59.486 EAL: No shared files mode enabled, IPC is disabled 00:04:59.486 EAL: TSC frequency is ~2600000 KHz 00:04:59.486 EAL: Main lcore 0 is ready (tid=7f468a8afa40;cpuset=[0]) 00:04:59.486 EAL: Trying to obtain current memory policy. 00:04:59.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.486 EAL: Restoring previous memory policy: 0 00:04:59.486 EAL: request: mp_malloc_sync 00:04:59.486 EAL: No shared files mode enabled, IPC is disabled 00:04:59.486 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.486 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.486 EAL: No shared files mode enabled, IPC is disabled 00:04:59.486 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:59.486 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.486 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:59.486 00:04:59.486 00:04:59.486 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.486 http://cunit.sourceforge.net/ 00:04:59.486 00:04:59.486 00:04:59.486 Suite: components_suite 00:04:59.745 Test: vtophys_malloc_test ...passed 00:04:59.745 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.006 EAL: Restoring previous memory policy: 4 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was expanded by 4MB 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was shrunk by 4MB 00:05:00.006 EAL: Trying to obtain current memory policy. 00:05:00.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.006 EAL: Restoring previous memory policy: 4 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was expanded by 6MB 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was shrunk by 6MB 00:05:00.006 EAL: Trying to obtain current memory policy. 00:05:00.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.006 EAL: Restoring previous memory policy: 4 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was expanded by 10MB 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was shrunk by 10MB 00:05:00.006 EAL: Trying to obtain current memory policy. 
00:05:00.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.006 EAL: Restoring previous memory policy: 4 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was expanded by 18MB 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was shrunk by 18MB 00:05:00.006 EAL: Trying to obtain current memory policy. 00:05:00.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.006 EAL: Restoring previous memory policy: 4 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was expanded by 34MB 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was shrunk by 34MB 00:05:00.006 EAL: Trying to obtain current memory policy. 00:05:00.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.006 EAL: Restoring previous memory policy: 4 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was expanded by 66MB 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was shrunk by 66MB 00:05:00.006 EAL: Trying to obtain current memory policy. 00:05:00.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.006 EAL: Restoring previous memory policy: 4 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.006 EAL: Heap on socket 0 was expanded by 130MB 00:05:00.006 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.006 EAL: request: mp_malloc_sync 00:05:00.006 EAL: No shared files mode enabled, IPC is disabled 00:05:00.007 EAL: Heap on socket 0 was shrunk by 130MB 00:05:00.007 EAL: Trying to obtain current memory policy. 00:05:00.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.007 EAL: Restoring previous memory policy: 4 00:05:00.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.007 EAL: request: mp_malloc_sync 00:05:00.007 EAL: No shared files mode enabled, IPC is disabled 00:05:00.007 EAL: Heap on socket 0 was expanded by 258MB 00:05:00.007 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.007 EAL: request: mp_malloc_sync 00:05:00.007 EAL: No shared files mode enabled, IPC is disabled 00:05:00.007 EAL: Heap on socket 0 was shrunk by 258MB 00:05:00.007 EAL: Trying to obtain current memory policy. 
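Each vtophys_spdk_malloc_test step above allocates one progressively larger buffer; the registered mem event callback expands the heap on allocation and shrinks it again on free. The sizes are not arbitrary: they follow 2^n + 2 MB. A one-line bash sketch of the sequence the suite walks through (purely illustrative):

    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
    for n in $(seq 1 10); do printf '%sMB ' "$(( (1 << n) + 2 ))"; done; echo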
00:05:00.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.268 EAL: Restoring previous memory policy: 4 00:05:00.268 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.268 EAL: request: mp_malloc_sync 00:05:00.268 EAL: No shared files mode enabled, IPC is disabled 00:05:00.268 EAL: Heap on socket 0 was expanded by 514MB 00:05:00.268 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.268 EAL: request: mp_malloc_sync 00:05:00.268 EAL: No shared files mode enabled, IPC is disabled 00:05:00.268 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.268 EAL: Trying to obtain current memory policy. 00:05:00.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.527 EAL: Restoring previous memory policy: 4 00:05:00.527 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.527 EAL: request: mp_malloc_sync 00:05:00.527 EAL: No shared files mode enabled, IPC is disabled 00:05:00.527 EAL: Heap on socket 0 was expanded by 1026MB 00:05:00.527 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.787 passed 00:05:00.787 00:05:00.787 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.787 suites 1 1 n/a 0 0 00:05:00.787 tests 2 2 2 0 0 00:05:00.787 asserts 5218 5218 5218 0 n/a 00:05:00.787 00:05:00.787 Elapsed time = 1.116 seconds 00:05:00.787 EAL: request: mp_malloc_sync 00:05:00.787 EAL: No shared files mode enabled, IPC is disabled 00:05:00.787 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.787 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.787 EAL: request: mp_malloc_sync 00:05:00.787 EAL: No shared files mode enabled, IPC is disabled 00:05:00.787 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.787 EAL: No shared files mode enabled, IPC is disabled 00:05:00.787 EAL: No shared files mode enabled, IPC is disabled 00:05:00.788 EAL: No shared files mode enabled, IPC is disabled 00:05:00.788 00:05:00.788 real 0m1.360s 00:05:00.788 user 0m0.529s 00:05:00.788 sys 0m0.695s 00:05:00.788 ************************************ 00:05:00.788 END TEST env_vtophys 00:05:00.788 ************************************ 00:05:00.788 23:17:46 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.788 23:17:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:00.788 23:17:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.788 23:17:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.788 23:17:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.788 23:17:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.788 ************************************ 00:05:00.788 START TEST env_pci 00:05:00.788 ************************************ 00:05:00.788 23:17:46 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.788 00:05:00.788 00:05:00.788 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.788 http://cunit.sourceforge.net/ 00:05:00.788 00:05:00.788 00:05:00.788 Suite: pci 00:05:00.788 Test: pci_hook ...[2024-11-19 23:17:46.813837] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69296 has claimed it 00:05:00.788 passed 00:05:00.788 00:05:00.788 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.788 suites 1 1 n/a 0 0 00:05:00.788 tests 1 1 1 0 0 00:05:00.788 asserts 25 25 25 0 n/a 00:05:00.788 00:05:00.788 Elapsed time = 0.006 seconds 00:05:00.788 EAL: Cannot find 
device (10000:00:01.0) 00:05:00.788 EAL: Failed to attach device on primary process 00:05:00.788 00:05:00.788 real 0m0.054s 00:05:00.788 user 0m0.019s 00:05:00.788 sys 0m0.032s 00:05:00.788 23:17:46 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.788 ************************************ 00:05:00.788 END TEST env_pci 00:05:00.788 23:17:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.788 ************************************ 00:05:00.788 23:17:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.788 23:17:46 env -- env/env.sh@15 -- # uname 00:05:00.788 23:17:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.788 23:17:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.788 23:17:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.788 23:17:46 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:00.788 23:17:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.788 23:17:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.788 ************************************ 00:05:00.788 START TEST env_dpdk_post_init 00:05:00.788 ************************************ 00:05:00.788 23:17:46 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.788 EAL: Detected CPU lcores: 10 00:05:00.788 EAL: Detected NUMA nodes: 1 00:05:00.788 EAL: Detected shared linkage of DPDK 00:05:00.788 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.788 EAL: Selected IOVA mode 'PA' 00:05:01.049 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:01.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:01.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:01.049 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:01.049 Starting DPDK initialization... 00:05:01.049 Starting SPDK post initialization... 00:05:01.049 SPDK NVMe probe 00:05:01.049 Attaching to 0000:00:10.0 00:05:01.049 Attaching to 0000:00:11.0 00:05:01.049 Attaching to 0000:00:12.0 00:05:01.049 Attaching to 0000:00:13.0 00:05:01.049 Attached to 0000:00:13.0 00:05:01.049 Attached to 0000:00:10.0 00:05:01.049 Attached to 0000:00:11.0 00:05:01.049 Attached to 0000:00:12.0 00:05:01.049 Cleaning up... 
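The env binaries run so far can be replayed outside the autotest harness once setup.sh has bound the controllers, as the log above shows. A sketch, assuming an SPDK checkout at $SPDK_DIR (the variable is illustrative; the flags are the ones visible in the trace):

    sudo "$SPDK_DIR/scripts/setup.sh"            # bind NVMe -> uio_pci_generic, as above
    sudo "$SPDK_DIR/test/env/memory/memory_ut"   # mem map alloc/translation/registration tests
    sudo "$SPDK_DIR/test/env/vtophys/vtophys"    # heap expand/shrink via mem event callbacks
    sudo "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" \
        -c 0x1 --base-virtaddr=0x200000000000    # probes all four spdk_nvme controllers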
00:05:01.049 00:05:01.049 real 0m0.242s 00:05:01.049 user 0m0.082s 00:05:01.049 sys 0m0.063s 00:05:01.049 ************************************ 00:05:01.049 END TEST env_dpdk_post_init 00:05:01.049 ************************************ 00:05:01.049 23:17:47 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.049 23:17:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.049 23:17:47 env -- env/env.sh@26 -- # uname 00:05:01.049 23:17:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.049 23:17:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.049 23:17:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.049 23:17:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.049 23:17:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.049 ************************************ 00:05:01.049 START TEST env_mem_callbacks 00:05:01.049 ************************************ 00:05:01.049 23:17:47 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.049 EAL: Detected CPU lcores: 10 00:05:01.049 EAL: Detected NUMA nodes: 1 00:05:01.049 EAL: Detected shared linkage of DPDK 00:05:01.310 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.310 EAL: Selected IOVA mode 'PA' 00:05:01.310 00:05:01.310 00:05:01.310 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.310 http://cunit.sourceforge.net/ 00:05:01.310 00:05:01.310 00:05:01.310 Suite: memory 00:05:01.310 Test: test ... 00:05:01.310 register 0x200000200000 2097152 00:05:01.310 malloc 3145728 00:05:01.310 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.310 register 0x200000400000 4194304 00:05:01.310 buf 0x200000500000 len 3145728 PASSED 00:05:01.310 malloc 64 00:05:01.310 buf 0x2000004fff40 len 64 PASSED 00:05:01.310 malloc 4194304 00:05:01.310 register 0x200000800000 6291456 00:05:01.310 buf 0x200000a00000 len 4194304 PASSED 00:05:01.310 free 0x200000500000 3145728 00:05:01.310 free 0x2000004fff40 64 00:05:01.310 unregister 0x200000400000 4194304 PASSED 00:05:01.310 free 0x200000a00000 4194304 00:05:01.310 unregister 0x200000800000 6291456 PASSED 00:05:01.310 malloc 8388608 00:05:01.310 register 0x200000400000 10485760 00:05:01.310 buf 0x200000600000 len 8388608 PASSED 00:05:01.310 free 0x200000600000 8388608 00:05:01.310 unregister 0x200000400000 10485760 PASSED 00:05:01.310 passed 00:05:01.310 00:05:01.310 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.310 suites 1 1 n/a 0 0 00:05:01.310 tests 1 1 1 0 0 00:05:01.310 asserts 15 15 15 0 n/a 00:05:01.310 00:05:01.310 Elapsed time = 0.012 seconds 00:05:01.310 00:05:01.310 real 0m0.177s 00:05:01.310 user 0m0.023s 00:05:01.310 sys 0m0.052s 00:05:01.310 23:17:47 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.310 23:17:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.310 ************************************ 00:05:01.310 END TEST env_mem_callbacks 00:05:01.310 ************************************ 00:05:01.310 00:05:01.310 real 0m2.528s 00:05:01.310 user 0m1.053s 00:05:01.310 sys 0m1.059s 00:05:01.310 ************************************ 00:05:01.310 END TEST env 00:05:01.310 23:17:47 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.310 23:17:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.310 
************************************ 00:05:01.310 23:17:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.310 23:17:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.310 23:17:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.310 23:17:47 -- common/autotest_common.sh@10 -- # set +x 00:05:01.310 ************************************ 00:05:01.310 START TEST rpc 00:05:01.310 ************************************ 00:05:01.310 23:17:47 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.572 * Looking for test storage... 00:05:01.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.572 23:17:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.572 23:17:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.572 23:17:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.572 23:17:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.572 23:17:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.572 23:17:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.572 23:17:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.572 23:17:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.572 23:17:47 rpc -- scripts/common.sh@345 -- # : 1 00:05:01.572 23:17:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.572 23:17:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.572 23:17:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.572 23:17:47 rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.572 23:17:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.572 23:17:47 rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.572 23:17:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.572 23:17:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.572 23:17:47 rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.572 23:17:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.572 23:17:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.572 23:17:47 rpc -- scripts/common.sh@368 -- # return 0 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.572 --rc genhtml_branch_coverage=1 00:05:01.572 --rc genhtml_function_coverage=1 00:05:01.572 --rc genhtml_legend=1 00:05:01.572 --rc geninfo_all_blocks=1 00:05:01.572 --rc geninfo_unexecuted_blocks=1 00:05:01.572 00:05:01.572 ' 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.572 --rc genhtml_branch_coverage=1 00:05:01.572 --rc genhtml_function_coverage=1 00:05:01.572 --rc genhtml_legend=1 00:05:01.572 --rc geninfo_all_blocks=1 00:05:01.572 --rc geninfo_unexecuted_blocks=1 00:05:01.572 00:05:01.572 ' 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.572 --rc genhtml_branch_coverage=1 00:05:01.572 --rc genhtml_function_coverage=1 00:05:01.572 --rc genhtml_legend=1 00:05:01.572 --rc geninfo_all_blocks=1 00:05:01.572 --rc geninfo_unexecuted_blocks=1 00:05:01.572 00:05:01.572 ' 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.572 --rc genhtml_branch_coverage=1 00:05:01.572 --rc genhtml_function_coverage=1 00:05:01.572 --rc genhtml_legend=1 00:05:01.572 --rc geninfo_all_blocks=1 00:05:01.572 --rc geninfo_unexecuted_blocks=1 00:05:01.572 00:05:01.572 ' 00:05:01.572 23:17:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69417 00:05:01.572 23:17:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.572 23:17:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69417 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@835 -- # '[' -z 69417 ']' 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
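rpc.sh@64 launches spdk_tgt with -e bdev, and every rpc_cmd that follows is a JSON-RPC call over the /var/tmp/spdk.sock socket it listens on. The round-trip the rpc_integrity test performs can be sketched by hand with SPDK's rpc.py client; the method names are the ones traced below, while $SPDK_DIR and the rpc helper are illustrative:

    "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &              # listens on /var/tmp/spdk.sock
    sleep 1                                               # crude wait; the harness uses waitforlisten
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_malloc_create 8 512                          # -> Malloc0: 16384 blocks of 512 B
    rpc bdev_passthru_create -b Malloc0 -p Passthru0      # claims Malloc0 behind Passthru0
    rpc bdev_get_bdevs | jq length                        # 2 bdevs while the passthru exists
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete Malloc0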
00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.572 23:17:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.572 23:17:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.572 [2024-11-19 23:17:47.734674] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:01.573 [2024-11-19 23:17:47.734855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69417 ] 00:05:01.854 [2024-11-19 23:17:47.896684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.854 [2024-11-19 23:17:47.916671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.854 [2024-11-19 23:17:47.916724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69417' to capture a snapshot of events at runtime. 00:05:01.854 [2024-11-19 23:17:47.916748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.854 [2024-11-19 23:17:47.916756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.854 [2024-11-19 23:17:47.916770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69417 for offline analysis/debug. 00:05:01.854 [2024-11-19 23:17:47.917094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.449 23:17:48 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.449 23:17:48 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.449 23:17:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.449 23:17:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.449 23:17:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.449 23:17:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.449 23:17:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.449 23:17:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.449 23:17:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.449 ************************************ 00:05:02.449 START TEST rpc_integrity 00:05:02.449 ************************************ 00:05:02.449 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:02.449 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.449 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.449 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.449 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.449 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.449 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.710 { 00:05:02.710 "name": "Malloc0", 00:05:02.710 "aliases": [ 00:05:02.710 "86952e5b-f924-46ae-929e-6bcddae435c9" 00:05:02.710 ], 00:05:02.710 "product_name": "Malloc disk", 00:05:02.710 "block_size": 512, 00:05:02.710 "num_blocks": 16384, 00:05:02.710 "uuid": "86952e5b-f924-46ae-929e-6bcddae435c9", 00:05:02.710 "assigned_rate_limits": { 00:05:02.710 "rw_ios_per_sec": 0, 00:05:02.710 "rw_mbytes_per_sec": 0, 00:05:02.710 "r_mbytes_per_sec": 0, 00:05:02.710 "w_mbytes_per_sec": 0 00:05:02.710 }, 00:05:02.710 "claimed": false, 00:05:02.710 "zoned": false, 00:05:02.710 "supported_io_types": { 00:05:02.710 "read": true, 00:05:02.710 "write": true, 00:05:02.710 "unmap": true, 00:05:02.710 "flush": true, 00:05:02.710 "reset": true, 00:05:02.710 "nvme_admin": false, 00:05:02.710 "nvme_io": false, 00:05:02.710 "nvme_io_md": false, 00:05:02.710 "write_zeroes": true, 00:05:02.710 "zcopy": true, 00:05:02.710 "get_zone_info": false, 00:05:02.710 "zone_management": false, 00:05:02.710 "zone_append": false, 00:05:02.710 "compare": false, 00:05:02.710 "compare_and_write": false, 00:05:02.710 "abort": true, 00:05:02.710 "seek_hole": false, 00:05:02.710 "seek_data": false, 00:05:02.710 "copy": true, 00:05:02.710 "nvme_iov_md": false 00:05:02.710 }, 00:05:02.710 "memory_domains": [ 00:05:02.710 { 00:05:02.710 "dma_device_id": "system", 00:05:02.710 "dma_device_type": 1 00:05:02.710 }, 00:05:02.710 { 00:05:02.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.710 "dma_device_type": 2 00:05:02.710 } 00:05:02.710 ], 00:05:02.710 "driver_specific": {} 00:05:02.710 } 00:05:02.710 ]' 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.710 [2024-11-19 23:17:48.703134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:02.710 [2024-11-19 23:17:48.703217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.710 [2024-11-19 23:17:48.703248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:02.710 [2024-11-19 23:17:48.703259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.710 [2024-11-19 23:17:48.705827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.710 [2024-11-19 23:17:48.705878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.710 
Passthru0 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.710 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.710 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.710 { 00:05:02.710 "name": "Malloc0", 00:05:02.710 "aliases": [ 00:05:02.710 "86952e5b-f924-46ae-929e-6bcddae435c9" 00:05:02.710 ], 00:05:02.710 "product_name": "Malloc disk", 00:05:02.710 "block_size": 512, 00:05:02.710 "num_blocks": 16384, 00:05:02.710 "uuid": "86952e5b-f924-46ae-929e-6bcddae435c9", 00:05:02.710 "assigned_rate_limits": { 00:05:02.710 "rw_ios_per_sec": 0, 00:05:02.710 "rw_mbytes_per_sec": 0, 00:05:02.710 "r_mbytes_per_sec": 0, 00:05:02.710 "w_mbytes_per_sec": 0 00:05:02.710 }, 00:05:02.710 "claimed": true, 00:05:02.710 "claim_type": "exclusive_write", 00:05:02.710 "zoned": false, 00:05:02.710 "supported_io_types": { 00:05:02.710 "read": true, 00:05:02.710 "write": true, 00:05:02.710 "unmap": true, 00:05:02.710 "flush": true, 00:05:02.710 "reset": true, 00:05:02.710 "nvme_admin": false, 00:05:02.710 "nvme_io": false, 00:05:02.710 "nvme_io_md": false, 00:05:02.711 "write_zeroes": true, 00:05:02.711 "zcopy": true, 00:05:02.711 "get_zone_info": false, 00:05:02.711 "zone_management": false, 00:05:02.711 "zone_append": false, 00:05:02.711 "compare": false, 00:05:02.711 "compare_and_write": false, 00:05:02.711 "abort": true, 00:05:02.711 "seek_hole": false, 00:05:02.711 "seek_data": false, 00:05:02.711 "copy": true, 00:05:02.711 "nvme_iov_md": false 00:05:02.711 }, 00:05:02.711 "memory_domains": [ 00:05:02.711 { 00:05:02.711 "dma_device_id": "system", 00:05:02.711 "dma_device_type": 1 00:05:02.711 }, 00:05:02.711 { 00:05:02.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.711 "dma_device_type": 2 00:05:02.711 } 00:05:02.711 ], 00:05:02.711 "driver_specific": {} 00:05:02.711 }, 00:05:02.711 { 00:05:02.711 "name": "Passthru0", 00:05:02.711 "aliases": [ 00:05:02.711 "b8c63e57-eb2b-58c4-b2fb-77a5276dce80" 00:05:02.711 ], 00:05:02.711 "product_name": "passthru", 00:05:02.711 "block_size": 512, 00:05:02.711 "num_blocks": 16384, 00:05:02.711 "uuid": "b8c63e57-eb2b-58c4-b2fb-77a5276dce80", 00:05:02.711 "assigned_rate_limits": { 00:05:02.711 "rw_ios_per_sec": 0, 00:05:02.711 "rw_mbytes_per_sec": 0, 00:05:02.711 "r_mbytes_per_sec": 0, 00:05:02.711 "w_mbytes_per_sec": 0 00:05:02.711 }, 00:05:02.711 "claimed": false, 00:05:02.711 "zoned": false, 00:05:02.711 "supported_io_types": { 00:05:02.711 "read": true, 00:05:02.711 "write": true, 00:05:02.711 "unmap": true, 00:05:02.711 "flush": true, 00:05:02.711 "reset": true, 00:05:02.711 "nvme_admin": false, 00:05:02.711 "nvme_io": false, 00:05:02.711 "nvme_io_md": false, 00:05:02.711 "write_zeroes": true, 00:05:02.711 "zcopy": true, 00:05:02.711 "get_zone_info": false, 00:05:02.711 "zone_management": false, 00:05:02.711 "zone_append": false, 00:05:02.711 "compare": false, 00:05:02.711 "compare_and_write": false, 00:05:02.711 "abort": true, 00:05:02.711 "seek_hole": false, 00:05:02.711 "seek_data": false, 00:05:02.711 "copy": true, 00:05:02.711 "nvme_iov_md": false 00:05:02.711 }, 00:05:02.711 "memory_domains": [ 00:05:02.711 { 00:05:02.711 "dma_device_id": "system", 00:05:02.711 "dma_device_type": 1 00:05:02.711 }, 
00:05:02.711 { 00:05:02.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.711 "dma_device_type": 2 00:05:02.711 } 00:05:02.711 ], 00:05:02.711 "driver_specific": { 00:05:02.711 "passthru": { 00:05:02.711 "name": "Passthru0", 00:05:02.711 "base_bdev_name": "Malloc0" 00:05:02.711 } 00:05:02.711 } 00:05:02.711 } 00:05:02.711 ]' 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.711 23:17:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.711 00:05:02.711 real 0m0.223s 00:05:02.711 user 0m0.124s 00:05:02.711 sys 0m0.032s 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.711 23:17:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.711 ************************************ 00:05:02.711 END TEST rpc_integrity 00:05:02.711 ************************************ 00:05:02.711 23:17:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:02.711 23:17:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.711 23:17:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.711 23:17:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.711 ************************************ 00:05:02.711 START TEST rpc_plugins 00:05:02.711 ************************************ 00:05:02.711 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:02.711 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.711 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.711 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.972 23:17:48 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.972 { 00:05:02.972 "name": "Malloc1", 00:05:02.972 "aliases": [ 00:05:02.972 "0c096854-7d61-4516-9db9-519bbb274324" 00:05:02.972 ], 00:05:02.972 "product_name": "Malloc disk", 00:05:02.972 "block_size": 4096, 00:05:02.972 "num_blocks": 256, 00:05:02.972 "uuid": "0c096854-7d61-4516-9db9-519bbb274324", 00:05:02.972 "assigned_rate_limits": { 00:05:02.972 "rw_ios_per_sec": 0, 00:05:02.972 "rw_mbytes_per_sec": 0, 00:05:02.972 "r_mbytes_per_sec": 0, 00:05:02.972 "w_mbytes_per_sec": 0 00:05:02.972 }, 00:05:02.972 "claimed": false, 00:05:02.972 "zoned": false, 00:05:02.972 "supported_io_types": { 00:05:02.972 "read": true, 00:05:02.972 "write": true, 00:05:02.972 "unmap": true, 00:05:02.972 "flush": true, 00:05:02.972 "reset": true, 00:05:02.972 "nvme_admin": false, 00:05:02.972 "nvme_io": false, 00:05:02.972 "nvme_io_md": false, 00:05:02.972 "write_zeroes": true, 00:05:02.972 "zcopy": true, 00:05:02.972 "get_zone_info": false, 00:05:02.972 "zone_management": false, 00:05:02.972 "zone_append": false, 00:05:02.972 "compare": false, 00:05:02.972 "compare_and_write": false, 00:05:02.972 "abort": true, 00:05:02.972 "seek_hole": false, 00:05:02.972 "seek_data": false, 00:05:02.972 "copy": true, 00:05:02.972 "nvme_iov_md": false 00:05:02.972 }, 00:05:02.972 "memory_domains": [ 00:05:02.972 { 00:05:02.972 "dma_device_id": "system", 00:05:02.972 "dma_device_type": 1 00:05:02.972 }, 00:05:02.972 { 00:05:02.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.972 "dma_device_type": 2 00:05:02.972 } 00:05:02.972 ], 00:05:02.972 "driver_specific": {} 00:05:02.972 } 00:05:02.972 ]' 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:02.972 23:17:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.972 00:05:02.972 real 0m0.111s 00:05:02.972 user 0m0.064s 00:05:02.972 sys 0m0.011s 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.972 23:17:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.972 ************************************ 00:05:02.972 END TEST rpc_plugins 00:05:02.972 ************************************ 00:05:02.972 23:17:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.972 23:17:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.972 23:17:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.972 23:17:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.973 ************************************ 00:05:02.973 START TEST rpc_trace_cmd_test 
00:05:02.973 ************************************ 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:02.973 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69417", 00:05:02.973 "tpoint_group_mask": "0x8", 00:05:02.973 "iscsi_conn": { 00:05:02.973 "mask": "0x2", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "scsi": { 00:05:02.973 "mask": "0x4", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "bdev": { 00:05:02.973 "mask": "0x8", 00:05:02.973 "tpoint_mask": "0xffffffffffffffff" 00:05:02.973 }, 00:05:02.973 "nvmf_rdma": { 00:05:02.973 "mask": "0x10", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "nvmf_tcp": { 00:05:02.973 "mask": "0x20", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "ftl": { 00:05:02.973 "mask": "0x40", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "blobfs": { 00:05:02.973 "mask": "0x80", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "dsa": { 00:05:02.973 "mask": "0x200", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "thread": { 00:05:02.973 "mask": "0x400", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "nvme_pcie": { 00:05:02.973 "mask": "0x800", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "iaa": { 00:05:02.973 "mask": "0x1000", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "nvme_tcp": { 00:05:02.973 "mask": "0x2000", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "bdev_nvme": { 00:05:02.973 "mask": "0x4000", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "sock": { 00:05:02.973 "mask": "0x8000", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "blob": { 00:05:02.973 "mask": "0x10000", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "bdev_raid": { 00:05:02.973 "mask": "0x20000", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 }, 00:05:02.973 "scheduler": { 00:05:02.973 "mask": "0x40000", 00:05:02.973 "tpoint_mask": "0x0" 00:05:02.973 } 00:05:02.973 }' 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:02.973 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.235 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.235 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.235 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.235 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.235 23:17:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.235 00:05:03.235 real 0m0.169s 00:05:03.235 
user 0m0.129s 00:05:03.235 sys 0m0.027s 00:05:03.235 23:17:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.235 ************************************ 00:05:03.235 END TEST rpc_trace_cmd_test 00:05:03.235 ************************************ 00:05:03.235 23:17:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.235 23:17:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.235 23:17:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.235 23:17:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.235 23:17:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.235 23:17:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.235 23:17:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.235 ************************************ 00:05:03.235 START TEST rpc_daemon_integrity 00:05:03.235 ************************************ 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.235 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.235 { 00:05:03.235 "name": "Malloc2", 00:05:03.235 "aliases": [ 00:05:03.235 "9cbea4bd-a23b-4492-9f65-8dc6d1c8ba60" 00:05:03.235 ], 00:05:03.235 "product_name": "Malloc disk", 00:05:03.235 "block_size": 512, 00:05:03.235 "num_blocks": 16384, 00:05:03.235 "uuid": "9cbea4bd-a23b-4492-9f65-8dc6d1c8ba60", 00:05:03.235 "assigned_rate_limits": { 00:05:03.235 "rw_ios_per_sec": 0, 00:05:03.235 "rw_mbytes_per_sec": 0, 00:05:03.235 "r_mbytes_per_sec": 0, 00:05:03.235 "w_mbytes_per_sec": 0 00:05:03.235 }, 00:05:03.235 "claimed": false, 00:05:03.235 "zoned": false, 00:05:03.235 "supported_io_types": { 00:05:03.235 "read": true, 00:05:03.235 "write": true, 00:05:03.235 "unmap": true, 00:05:03.235 "flush": true, 00:05:03.235 "reset": true, 00:05:03.235 "nvme_admin": false, 00:05:03.235 "nvme_io": false, 00:05:03.235 "nvme_io_md": false, 00:05:03.235 "write_zeroes": true, 00:05:03.235 "zcopy": true, 00:05:03.235 "get_zone_info": 
false, 00:05:03.235 "zone_management": false, 00:05:03.235 "zone_append": false, 00:05:03.235 "compare": false, 00:05:03.235 "compare_and_write": false, 00:05:03.235 "abort": true, 00:05:03.236 "seek_hole": false, 00:05:03.236 "seek_data": false, 00:05:03.236 "copy": true, 00:05:03.236 "nvme_iov_md": false 00:05:03.236 }, 00:05:03.236 "memory_domains": [ 00:05:03.236 { 00:05:03.236 "dma_device_id": "system", 00:05:03.236 "dma_device_type": 1 00:05:03.236 }, 00:05:03.236 { 00:05:03.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.236 "dma_device_type": 2 00:05:03.236 } 00:05:03.236 ], 00:05:03.236 "driver_specific": {} 00:05:03.236 } 00:05:03.236 ]' 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.236 [2024-11-19 23:17:49.384207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.236 [2024-11-19 23:17:49.384279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.236 [2024-11-19 23:17:49.384309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:03.236 [2024-11-19 23:17:49.384318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.236 [2024-11-19 23:17:49.386806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.236 [2024-11-19 23:17:49.386858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.236 Passthru0 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.236 { 00:05:03.236 "name": "Malloc2", 00:05:03.236 "aliases": [ 00:05:03.236 "9cbea4bd-a23b-4492-9f65-8dc6d1c8ba60" 00:05:03.236 ], 00:05:03.236 "product_name": "Malloc disk", 00:05:03.236 "block_size": 512, 00:05:03.236 "num_blocks": 16384, 00:05:03.236 "uuid": "9cbea4bd-a23b-4492-9f65-8dc6d1c8ba60", 00:05:03.236 "assigned_rate_limits": { 00:05:03.236 "rw_ios_per_sec": 0, 00:05:03.236 "rw_mbytes_per_sec": 0, 00:05:03.236 "r_mbytes_per_sec": 0, 00:05:03.236 "w_mbytes_per_sec": 0 00:05:03.236 }, 00:05:03.236 "claimed": true, 00:05:03.236 "claim_type": "exclusive_write", 00:05:03.236 "zoned": false, 00:05:03.236 "supported_io_types": { 00:05:03.236 "read": true, 00:05:03.236 "write": true, 00:05:03.236 "unmap": true, 00:05:03.236 "flush": true, 00:05:03.236 "reset": true, 00:05:03.236 "nvme_admin": false, 00:05:03.236 "nvme_io": false, 00:05:03.236 "nvme_io_md": false, 00:05:03.236 "write_zeroes": true, 00:05:03.236 "zcopy": true, 00:05:03.236 "get_zone_info": false, 00:05:03.236 "zone_management": false, 00:05:03.236 "zone_append": false, 00:05:03.236 "compare": false, 
00:05:03.236 "compare_and_write": false, 00:05:03.236 "abort": true, 00:05:03.236 "seek_hole": false, 00:05:03.236 "seek_data": false, 00:05:03.236 "copy": true, 00:05:03.236 "nvme_iov_md": false 00:05:03.236 }, 00:05:03.236 "memory_domains": [ 00:05:03.236 { 00:05:03.236 "dma_device_id": "system", 00:05:03.236 "dma_device_type": 1 00:05:03.236 }, 00:05:03.236 { 00:05:03.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.236 "dma_device_type": 2 00:05:03.236 } 00:05:03.236 ], 00:05:03.236 "driver_specific": {} 00:05:03.236 }, 00:05:03.236 { 00:05:03.236 "name": "Passthru0", 00:05:03.236 "aliases": [ 00:05:03.236 "b820dddf-5693-5e90-b5fb-325f3d5c293a" 00:05:03.236 ], 00:05:03.236 "product_name": "passthru", 00:05:03.236 "block_size": 512, 00:05:03.236 "num_blocks": 16384, 00:05:03.236 "uuid": "b820dddf-5693-5e90-b5fb-325f3d5c293a", 00:05:03.236 "assigned_rate_limits": { 00:05:03.236 "rw_ios_per_sec": 0, 00:05:03.236 "rw_mbytes_per_sec": 0, 00:05:03.236 "r_mbytes_per_sec": 0, 00:05:03.236 "w_mbytes_per_sec": 0 00:05:03.236 }, 00:05:03.236 "claimed": false, 00:05:03.236 "zoned": false, 00:05:03.236 "supported_io_types": { 00:05:03.236 "read": true, 00:05:03.236 "write": true, 00:05:03.236 "unmap": true, 00:05:03.236 "flush": true, 00:05:03.236 "reset": true, 00:05:03.236 "nvme_admin": false, 00:05:03.236 "nvme_io": false, 00:05:03.236 "nvme_io_md": false, 00:05:03.236 "write_zeroes": true, 00:05:03.236 "zcopy": true, 00:05:03.236 "get_zone_info": false, 00:05:03.236 "zone_management": false, 00:05:03.236 "zone_append": false, 00:05:03.236 "compare": false, 00:05:03.236 "compare_and_write": false, 00:05:03.236 "abort": true, 00:05:03.236 "seek_hole": false, 00:05:03.236 "seek_data": false, 00:05:03.236 "copy": true, 00:05:03.236 "nvme_iov_md": false 00:05:03.236 }, 00:05:03.236 "memory_domains": [ 00:05:03.236 { 00:05:03.236 "dma_device_id": "system", 00:05:03.236 "dma_device_type": 1 00:05:03.236 }, 00:05:03.236 { 00:05:03.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.236 "dma_device_type": 2 00:05:03.236 } 00:05:03.236 ], 00:05:03.236 "driver_specific": { 00:05:03.236 "passthru": { 00:05:03.236 "name": "Passthru0", 00:05:03.236 "base_bdev_name": "Malloc2" 00:05:03.236 } 00:05:03.236 } 00:05:03.236 } 00:05:03.236 ]' 00:05:03.236 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.497 00:05:03.497 real 0m0.233s 00:05:03.497 user 0m0.130s 00:05:03.497 sys 0m0.035s 00:05:03.497 ************************************ 00:05:03.497 END TEST rpc_daemon_integrity 00:05:03.497 ************************************ 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.497 23:17:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.497 23:17:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.497 23:17:49 rpc -- rpc/rpc.sh@84 -- # killprocess 69417 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@954 -- # '[' -z 69417 ']' 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@958 -- # kill -0 69417 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69417 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69417' 00:05:03.497 killing process with pid 69417 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@973 -- # kill 69417 00:05:03.497 23:17:49 rpc -- common/autotest_common.sh@978 -- # wait 69417 00:05:03.758 00:05:03.758 real 0m2.412s 00:05:03.758 user 0m2.788s 00:05:03.758 sys 0m0.668s 00:05:03.758 23:17:49 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.758 ************************************ 00:05:03.758 END TEST rpc 00:05:03.758 ************************************ 00:05:03.758 23:17:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.019 23:17:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.019 23:17:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.019 23:17:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.019 23:17:49 -- common/autotest_common.sh@10 -- # set +x 00:05:04.019 ************************************ 00:05:04.019 START TEST skip_rpc 00:05:04.019 ************************************ 00:05:04.019 23:17:49 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.019 * Looking for test storage... 
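The rpc_daemon_integrity pass above runs the same bdev lifecycle as rpc_integrity, only through the rpc_cmd daemon wrapper: create a malloc bdev, claim it with a passthru bdev (bdev_get_bdevs then reports the base as claimed with claim_type exclusive_write), confirm both bdevs are listed, and tear everything down again. A minimal by-hand sketch of that flow against a running spdk_tgt, using scripts/rpc.py from the repo checkout shown in the log:

    # 8 MiB malloc bdev with 512-byte blocks, as in the trace (bdev_malloc_create 8 512)
    scripts/rpc.py bdev_malloc_create 8 512            # prints the new name, e.g. Malloc2
    # stack a passthru bdev on top; this claims the base bdev
    scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length          # expect 2
    # delete in reverse order so the claim is released before the base goes away
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc2
    scripts/rpc.py bdev_get_bdevs | jq length          # expect 0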
00:05:04.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.019 23:17:50 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.019 23:17:50 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.019 23:17:50 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.019 23:17:50 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.019 23:17:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.019 23:17:50 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.019 23:17:50 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.019 --rc genhtml_branch_coverage=1 00:05:04.019 --rc genhtml_function_coverage=1 00:05:04.019 --rc genhtml_legend=1 00:05:04.019 --rc geninfo_all_blocks=1 00:05:04.019 --rc geninfo_unexecuted_blocks=1 00:05:04.019 00:05:04.019 ' 00:05:04.020 23:17:50 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.020 --rc genhtml_branch_coverage=1 00:05:04.020 --rc genhtml_function_coverage=1 00:05:04.020 --rc genhtml_legend=1 00:05:04.020 --rc geninfo_all_blocks=1 00:05:04.020 --rc geninfo_unexecuted_blocks=1 00:05:04.020 00:05:04.020 ' 00:05:04.020 23:17:50 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.020 --rc genhtml_branch_coverage=1 00:05:04.020 --rc genhtml_function_coverage=1 00:05:04.020 --rc genhtml_legend=1 00:05:04.020 --rc geninfo_all_blocks=1 00:05:04.020 --rc geninfo_unexecuted_blocks=1 00:05:04.020 00:05:04.020 ' 00:05:04.020 23:17:50 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.020 --rc genhtml_branch_coverage=1 00:05:04.020 --rc genhtml_function_coverage=1 00:05:04.020 --rc genhtml_legend=1 00:05:04.020 --rc geninfo_all_blocks=1 00:05:04.020 --rc geninfo_unexecuted_blocks=1 00:05:04.020 00:05:04.020 ' 00:05:04.020 23:17:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.020 23:17:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.020 23:17:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.020 23:17:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.020 23:17:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.020 23:17:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.020 ************************************ 00:05:04.020 START TEST skip_rpc 00:05:04.020 ************************************ 00:05:04.020 23:17:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:04.020 23:17:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69624 00:05:04.020 23:17:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.020 23:17:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.020 23:17:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.279 [2024-11-19 23:17:50.208830] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
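test_skip_rpc (started above) launches the target with --no-rpc-server, so no listen socket is ever created; the five-second sleep stands in for waitforlisten, and the NOT wrapper that follows passes only if rpc_cmd fails. Reproduced by hand, the pattern might look like this (default socket /var/tmp/spdk.sock assumed):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    # nothing is listening on the RPC socket, so this call must fail
    if scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC call succeeded" >&2
        exit 1
    fi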
00:05:04.279 [2024-11-19 23:17:50.208936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69624 ] 00:05:04.279 [2024-11-19 23:17:50.363916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.279 [2024-11-19 23:17:50.390763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69624 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69624 ']' 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69624 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69624 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.580 killing process with pid 69624 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.580 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69624' 00:05:09.581 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69624 00:05:09.581 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69624 00:05:09.581 00:05:09.581 real 0m5.259s 00:05:09.581 user 0m4.840s 00:05:09.581 sys 0m0.314s 00:05:09.581 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.581 ************************************ 00:05:09.581 END TEST skip_rpc 00:05:09.581 ************************************ 00:05:09.581 23:17:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:09.581 23:17:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:09.581 23:17:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.581 23:17:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.581 23:17:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.581 ************************************ 00:05:09.581 START TEST skip_rpc_with_json 00:05:09.581 ************************************ 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69706 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69706 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69706 ']' 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.581 23:17:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.581 [2024-11-19 23:17:55.518527] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
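skip_rpc_with_json (just started, pid 69706) builds a configuration over live RPC before snapshotting it. The exchange that follows is deliberate: asking for the TCP transport before one exists yields a JSON-RPC error (code -19, "No such device"), the transport is then created, and save_config dumps the whole runtime state as JSON. Roughly the equivalent manual calls:

    scripts/rpc.py nvmf_get_transports --trtype tcp    # errors until a TCP transport exists
    scripts/rpc.py nvmf_create_transport -t tcp        # logs '*** TCP Transport Init ***'
    scripts/rpc.py save_config > test/rpc/config.json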
00:05:09.581 [2024-11-19 23:17:55.518624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69706 ] 00:05:09.581 [2024-11-19 23:17:55.667173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.581 [2024-11-19 23:17:55.686399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.522 [2024-11-19 23:17:56.367402] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:10.522 request: 00:05:10.522 { 00:05:10.522 "trtype": "tcp", 00:05:10.522 "method": "nvmf_get_transports", 00:05:10.522 "req_id": 1 00:05:10.522 } 00:05:10.522 Got JSON-RPC error response 00:05:10.522 response: 00:05:10.522 { 00:05:10.522 "code": -19, 00:05:10.522 "message": "No such device" 00:05:10.522 } 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.522 [2024-11-19 23:17:56.375484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.522 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:10.522 { 00:05:10.522 "subsystems": [ 00:05:10.522 { 00:05:10.522 "subsystem": "fsdev", 00:05:10.522 "config": [ 00:05:10.522 { 00:05:10.522 "method": "fsdev_set_opts", 00:05:10.522 "params": { 00:05:10.522 "fsdev_io_pool_size": 65535, 00:05:10.522 "fsdev_io_cache_size": 256 00:05:10.522 } 00:05:10.522 } 00:05:10.522 ] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "keyring", 00:05:10.522 "config": [] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "iobuf", 00:05:10.522 "config": [ 00:05:10.522 { 00:05:10.522 "method": "iobuf_set_options", 00:05:10.522 "params": { 00:05:10.522 "small_pool_count": 8192, 00:05:10.522 "large_pool_count": 1024, 00:05:10.522 "small_bufsize": 8192, 00:05:10.522 "large_bufsize": 135168, 00:05:10.522 "enable_numa": false 00:05:10.522 } 00:05:10.522 } 00:05:10.522 ] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "sock", 00:05:10.522 "config": [ 00:05:10.522 { 
00:05:10.522 "method": "sock_set_default_impl", 00:05:10.522 "params": { 00:05:10.522 "impl_name": "posix" 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "sock_impl_set_options", 00:05:10.522 "params": { 00:05:10.522 "impl_name": "ssl", 00:05:10.522 "recv_buf_size": 4096, 00:05:10.522 "send_buf_size": 4096, 00:05:10.522 "enable_recv_pipe": true, 00:05:10.522 "enable_quickack": false, 00:05:10.522 "enable_placement_id": 0, 00:05:10.522 "enable_zerocopy_send_server": true, 00:05:10.522 "enable_zerocopy_send_client": false, 00:05:10.522 "zerocopy_threshold": 0, 00:05:10.522 "tls_version": 0, 00:05:10.522 "enable_ktls": false 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "sock_impl_set_options", 00:05:10.522 "params": { 00:05:10.522 "impl_name": "posix", 00:05:10.522 "recv_buf_size": 2097152, 00:05:10.522 "send_buf_size": 2097152, 00:05:10.522 "enable_recv_pipe": true, 00:05:10.522 "enable_quickack": false, 00:05:10.522 "enable_placement_id": 0, 00:05:10.522 "enable_zerocopy_send_server": true, 00:05:10.522 "enable_zerocopy_send_client": false, 00:05:10.522 "zerocopy_threshold": 0, 00:05:10.522 "tls_version": 0, 00:05:10.522 "enable_ktls": false 00:05:10.522 } 00:05:10.522 } 00:05:10.522 ] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "vmd", 00:05:10.522 "config": [] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "accel", 00:05:10.522 "config": [ 00:05:10.522 { 00:05:10.522 "method": "accel_set_options", 00:05:10.522 "params": { 00:05:10.522 "small_cache_size": 128, 00:05:10.522 "large_cache_size": 16, 00:05:10.522 "task_count": 2048, 00:05:10.522 "sequence_count": 2048, 00:05:10.522 "buf_count": 2048 00:05:10.522 } 00:05:10.522 } 00:05:10.522 ] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "bdev", 00:05:10.522 "config": [ 00:05:10.522 { 00:05:10.522 "method": "bdev_set_options", 00:05:10.522 "params": { 00:05:10.522 "bdev_io_pool_size": 65535, 00:05:10.522 "bdev_io_cache_size": 256, 00:05:10.522 "bdev_auto_examine": true, 00:05:10.522 "iobuf_small_cache_size": 128, 00:05:10.522 "iobuf_large_cache_size": 16 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "bdev_raid_set_options", 00:05:10.522 "params": { 00:05:10.522 "process_window_size_kb": 1024, 00:05:10.522 "process_max_bandwidth_mb_sec": 0 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "bdev_iscsi_set_options", 00:05:10.522 "params": { 00:05:10.522 "timeout_sec": 30 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "bdev_nvme_set_options", 00:05:10.522 "params": { 00:05:10.522 "action_on_timeout": "none", 00:05:10.522 "timeout_us": 0, 00:05:10.522 "timeout_admin_us": 0, 00:05:10.522 "keep_alive_timeout_ms": 10000, 00:05:10.522 "arbitration_burst": 0, 00:05:10.522 "low_priority_weight": 0, 00:05:10.522 "medium_priority_weight": 0, 00:05:10.522 "high_priority_weight": 0, 00:05:10.522 "nvme_adminq_poll_period_us": 10000, 00:05:10.522 "nvme_ioq_poll_period_us": 0, 00:05:10.522 "io_queue_requests": 0, 00:05:10.522 "delay_cmd_submit": true, 00:05:10.522 "transport_retry_count": 4, 00:05:10.522 "bdev_retry_count": 3, 00:05:10.522 "transport_ack_timeout": 0, 00:05:10.522 "ctrlr_loss_timeout_sec": 0, 00:05:10.522 "reconnect_delay_sec": 0, 00:05:10.522 "fast_io_fail_timeout_sec": 0, 00:05:10.522 "disable_auto_failback": false, 00:05:10.522 "generate_uuids": false, 00:05:10.522 "transport_tos": 0, 00:05:10.522 "nvme_error_stat": false, 00:05:10.522 "rdma_srq_size": 0, 00:05:10.522 "io_path_stat": false, 
00:05:10.522 "allow_accel_sequence": false, 00:05:10.522 "rdma_max_cq_size": 0, 00:05:10.522 "rdma_cm_event_timeout_ms": 0, 00:05:10.522 "dhchap_digests": [ 00:05:10.522 "sha256", 00:05:10.522 "sha384", 00:05:10.522 "sha512" 00:05:10.522 ], 00:05:10.522 "dhchap_dhgroups": [ 00:05:10.522 "null", 00:05:10.522 "ffdhe2048", 00:05:10.522 "ffdhe3072", 00:05:10.522 "ffdhe4096", 00:05:10.522 "ffdhe6144", 00:05:10.522 "ffdhe8192" 00:05:10.522 ] 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "bdev_nvme_set_hotplug", 00:05:10.522 "params": { 00:05:10.522 "period_us": 100000, 00:05:10.522 "enable": false 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "bdev_wait_for_examine" 00:05:10.522 } 00:05:10.522 ] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "scsi", 00:05:10.522 "config": null 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "scheduler", 00:05:10.522 "config": [ 00:05:10.522 { 00:05:10.522 "method": "framework_set_scheduler", 00:05:10.522 "params": { 00:05:10.522 "name": "static" 00:05:10.522 } 00:05:10.522 } 00:05:10.522 ] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "vhost_scsi", 00:05:10.522 "config": [] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "vhost_blk", 00:05:10.522 "config": [] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "ublk", 00:05:10.522 "config": [] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "nbd", 00:05:10.522 "config": [] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "nvmf", 00:05:10.522 "config": [ 00:05:10.522 { 00:05:10.522 "method": "nvmf_set_config", 00:05:10.522 "params": { 00:05:10.522 "discovery_filter": "match_any", 00:05:10.522 "admin_cmd_passthru": { 00:05:10.522 "identify_ctrlr": false 00:05:10.522 }, 00:05:10.522 "dhchap_digests": [ 00:05:10.522 "sha256", 00:05:10.522 "sha384", 00:05:10.522 "sha512" 00:05:10.522 ], 00:05:10.522 "dhchap_dhgroups": [ 00:05:10.522 "null", 00:05:10.522 "ffdhe2048", 00:05:10.522 "ffdhe3072", 00:05:10.522 "ffdhe4096", 00:05:10.522 "ffdhe6144", 00:05:10.522 "ffdhe8192" 00:05:10.522 ] 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "nvmf_set_max_subsystems", 00:05:10.522 "params": { 00:05:10.522 "max_subsystems": 1024 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "nvmf_set_crdt", 00:05:10.522 "params": { 00:05:10.522 "crdt1": 0, 00:05:10.522 "crdt2": 0, 00:05:10.522 "crdt3": 0 00:05:10.522 } 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "method": "nvmf_create_transport", 00:05:10.522 "params": { 00:05:10.522 "trtype": "TCP", 00:05:10.522 "max_queue_depth": 128, 00:05:10.522 "max_io_qpairs_per_ctrlr": 127, 00:05:10.522 "in_capsule_data_size": 4096, 00:05:10.522 "max_io_size": 131072, 00:05:10.522 "io_unit_size": 131072, 00:05:10.522 "max_aq_depth": 128, 00:05:10.522 "num_shared_buffers": 511, 00:05:10.522 "buf_cache_size": 4294967295, 00:05:10.522 "dif_insert_or_strip": false, 00:05:10.522 "zcopy": false, 00:05:10.522 "c2h_success": true, 00:05:10.522 "sock_priority": 0, 00:05:10.522 "abort_timeout_sec": 1, 00:05:10.522 "ack_timeout": 0, 00:05:10.522 "data_wr_pool_size": 0 00:05:10.522 } 00:05:10.522 } 00:05:10.522 ] 00:05:10.522 }, 00:05:10.522 { 00:05:10.522 "subsystem": "iscsi", 00:05:10.522 "config": [ 00:05:10.522 { 00:05:10.522 "method": "iscsi_set_options", 00:05:10.522 "params": { 00:05:10.522 "node_base": "iqn.2016-06.io.spdk", 00:05:10.522 "max_sessions": 128, 00:05:10.522 "max_connections_per_session": 2, 00:05:10.522 "max_queue_depth": 64, 00:05:10.522 
"default_time2wait": 2, 00:05:10.522 "default_time2retain": 20, 00:05:10.522 "first_burst_length": 8192, 00:05:10.522 "immediate_data": true, 00:05:10.522 "allow_duplicated_isid": false, 00:05:10.522 "error_recovery_level": 0, 00:05:10.522 "nop_timeout": 60, 00:05:10.522 "nop_in_interval": 30, 00:05:10.522 "disable_chap": false, 00:05:10.522 "require_chap": false, 00:05:10.522 "mutual_chap": false, 00:05:10.523 "chap_group": 0, 00:05:10.523 "max_large_datain_per_connection": 64, 00:05:10.523 "max_r2t_per_connection": 4, 00:05:10.523 "pdu_pool_size": 36864, 00:05:10.523 "immediate_data_pool_size": 16384, 00:05:10.523 "data_out_pool_size": 2048 00:05:10.523 } 00:05:10.523 } 00:05:10.523 ] 00:05:10.523 } 00:05:10.523 ] 00:05:10.523 } 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69706 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69706 ']' 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69706 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69706 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.523 killing process with pid 69706 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69706' 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69706 00:05:10.523 23:17:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69706 00:05:10.783 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69729 00:05:10.783 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:10.783 23:17:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69729 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69729 ']' 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69729 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69729 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.069 killing process with pid 69729 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69729' 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69729 00:05:16.069 23:18:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69729 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.069 00:05:16.069 real 0m6.582s 00:05:16.069 user 0m6.314s 00:05:16.069 sys 0m0.504s 00:05:16.069 ************************************ 00:05:16.069 END TEST skip_rpc_with_json 00:05:16.069 ************************************ 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.069 23:18:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:16.069 23:18:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.069 23:18:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.069 23:18:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.069 ************************************ 00:05:16.069 START TEST skip_rpc_with_delay 00:05:16.069 ************************************ 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.069 [2024-11-19 23:18:02.178000] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
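The replay half of skip_rpc_with_json, wrapped up above, is what the grep is for: the target is restarted with --json pointing at the captured file, and 'TCP Transport Init' can only appear in the new log if nvmf_create_transport was re-applied from that JSON at startup. In outline (pid handling and redirects are a simplification of what the harness does):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5
    kill $!                                           # the harness uses killprocess
    grep -q 'TCP Transport Init' test/rpc/log.txt     # proves the config replayed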
00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.069 ************************************ 00:05:16.069 END TEST skip_rpc_with_delay 00:05:16.069 ************************************ 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.069 00:05:16.069 real 0m0.125s 00:05:16.069 user 0m0.067s 00:05:16.069 sys 0m0.056s 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.069 23:18:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:16.330 23:18:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:16.330 23:18:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:16.331 23:18:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:16.331 23:18:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.331 23:18:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.331 23:18:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.331 ************************************ 00:05:16.331 START TEST exit_on_failed_rpc_init 00:05:16.331 ************************************ 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69841 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69841 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69841 ']' 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.331 23:18:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.331 [2024-11-19 23:18:02.369282] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
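The skip_rpc_with_delay failure captured above is the intended outcome: --wait-for-rpc tells the app to pause startup until an RPC arrives, which is meaningless when --no-rpc-server disables the RPC server, so spdk_app_start rejects the combination outright and the test only checks that the exit status is non-zero. A by-hand version of the same assertion:

    # must fail fast: --wait-for-rpc depends on the RPC server that
    # --no-rpc-server removes
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started" >&2
        exit 1
    fi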
00:05:16.331 [2024-11-19 23:18:02.369441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69841 ] 00:05:16.591 [2024-11-19 23:18:02.525242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.591 [2024-11-19 23:18:02.548207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:17.182 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:17.182 [2024-11-19 23:18:03.280581] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:17.183 [2024-11-19 23:18:03.280699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69859 ] 00:05:17.443 [2024-11-19 23:18:03.437579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.443 [2024-11-19 23:18:03.455717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.443 [2024-11-19 23:18:03.455807] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
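exit_on_failed_rpc_init then brings up a second target (core mask 0x2) against the same default RPC socket, and the 'socket path in use' error above is the point of the test: the second instance cannot listen on /var/tmp/spdk.sock because the first one already owns it, RPC initialization fails (as the next entries show), and the app stops with a non-zero status. Running two targets side by side for real requires distinct sockets, e.g.:

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # each instance is then addressed explicitly
    scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version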
00:05:17.443 [2024-11-19 23:18:03.455829] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:17.443 [2024-11-19 23:18:03.455838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69841 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69841 ']' 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69841 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69841 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.443 killing process with pid 69841 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69841' 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69841 00:05:17.443 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69841 00:05:17.704 00:05:17.704 real 0m1.482s 00:05:17.704 user 0m1.611s 00:05:17.704 sys 0m0.396s 00:05:17.704 ************************************ 00:05:17.704 END TEST exit_on_failed_rpc_init 00:05:17.704 ************************************ 00:05:17.704 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.704 23:18:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.704 23:18:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.704 00:05:17.704 real 0m13.853s 00:05:17.704 user 0m12.983s 00:05:17.704 sys 0m1.455s 00:05:17.704 23:18:03 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.704 23:18:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.704 ************************************ 00:05:17.704 END TEST skip_rpc 00:05:17.704 ************************************ 00:05:17.704 23:18:03 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:17.704 23:18:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.704 23:18:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.704 23:18:03 -- common/autotest_common.sh@10 -- # set +x 00:05:17.704 
************************************ 00:05:17.704 START TEST rpc_client 00:05:17.704 ************************************ 00:05:17.704 23:18:03 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:17.964 * Looking for test storage... 00:05:17.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:17.964 23:18:03 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.964 23:18:03 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.964 23:18:03 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.964 23:18:04 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.964 --rc genhtml_branch_coverage=1 00:05:17.964 --rc genhtml_function_coverage=1 00:05:17.964 --rc genhtml_legend=1 00:05:17.964 --rc geninfo_all_blocks=1 00:05:17.964 --rc geninfo_unexecuted_blocks=1 00:05:17.964 00:05:17.964 ' 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.964 --rc genhtml_branch_coverage=1 00:05:17.964 --rc genhtml_function_coverage=1 00:05:17.964 --rc genhtml_legend=1 00:05:17.964 --rc geninfo_all_blocks=1 00:05:17.964 --rc geninfo_unexecuted_blocks=1 00:05:17.964 00:05:17.964 ' 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.964 --rc genhtml_branch_coverage=1 00:05:17.964 --rc genhtml_function_coverage=1 00:05:17.964 --rc genhtml_legend=1 00:05:17.964 --rc geninfo_all_blocks=1 00:05:17.964 --rc geninfo_unexecuted_blocks=1 00:05:17.964 00:05:17.964 ' 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.964 --rc genhtml_branch_coverage=1 00:05:17.964 --rc genhtml_function_coverage=1 00:05:17.964 --rc genhtml_legend=1 00:05:17.964 --rc geninfo_all_blocks=1 00:05:17.964 --rc geninfo_unexecuted_blocks=1 00:05:17.964 00:05:17.964 ' 00:05:17.964 23:18:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:17.964 OK 00:05:17.964 23:18:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:17.964 00:05:17.964 real 0m0.196s 00:05:17.964 user 0m0.115s 00:05:17.964 sys 0m0.088s 00:05:17.964 ************************************ 00:05:17.964 END TEST rpc_client 00:05:17.964 ************************************ 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.964 23:18:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:17.964 23:18:04 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:17.964 23:18:04 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.964 23:18:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.964 23:18:04 -- common/autotest_common.sh@10 -- # set +x 00:05:17.964 ************************************ 00:05:17.964 START TEST json_config 00:05:17.964 ************************************ 00:05:17.965 23:18:04 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.226 23:18:04 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.226 23:18:04 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.226 23:18:04 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.226 23:18:04 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.226 23:18:04 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.226 23:18:04 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.226 23:18:04 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.226 23:18:04 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:18.226 23:18:04 json_config -- scripts/common.sh@345 -- # : 1 00:05:18.226 23:18:04 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.226 23:18:04 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.226 23:18:04 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:18.226 23:18:04 json_config -- scripts/common.sh@353 -- # local d=1 00:05:18.226 23:18:04 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.226 23:18:04 json_config -- scripts/common.sh@355 -- # echo 1 00:05:18.226 23:18:04 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.226 23:18:04 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@353 -- # local d=2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.226 23:18:04 json_config -- scripts/common.sh@355 -- # echo 2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.226 23:18:04 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.226 23:18:04 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.226 23:18:04 json_config -- scripts/common.sh@368 -- # return 0 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.226 --rc genhtml_branch_coverage=1 00:05:18.226 --rc genhtml_function_coverage=1 00:05:18.226 --rc genhtml_legend=1 00:05:18.226 --rc geninfo_all_blocks=1 00:05:18.226 --rc geninfo_unexecuted_blocks=1 00:05:18.226 00:05:18.226 ' 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.226 --rc genhtml_branch_coverage=1 00:05:18.226 --rc genhtml_function_coverage=1 00:05:18.226 --rc genhtml_legend=1 00:05:18.226 --rc geninfo_all_blocks=1 00:05:18.226 --rc geninfo_unexecuted_blocks=1 00:05:18.226 00:05:18.226 ' 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.226 --rc genhtml_branch_coverage=1 00:05:18.226 --rc genhtml_function_coverage=1 00:05:18.226 --rc genhtml_legend=1 00:05:18.226 --rc geninfo_all_blocks=1 00:05:18.226 --rc geninfo_unexecuted_blocks=1 00:05:18.226 00:05:18.226 ' 00:05:18.226 23:18:04 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.226 --rc genhtml_branch_coverage=1 00:05:18.226 --rc genhtml_function_coverage=1 00:05:18.226 --rc genhtml_legend=1 00:05:18.227 --rc geninfo_all_blocks=1 00:05:18.227 --rc geninfo_unexecuted_blocks=1 00:05:18.227 00:05:18.227 ' 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.227 23:18:04 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b93f5f7c-f437-4b52-82ca-ba312a95313a 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b93f5f7c-f437-4b52-82ca-ba312a95313a 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.227 23:18:04 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.227 23:18:04 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.227 23:18:04 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.227 23:18:04 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.227 23:18:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.227 23:18:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.227 23:18:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.227 23:18:04 json_config -- paths/export.sh@5 -- # export PATH 00:05:18.227 23:18:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@51 -- # : 0 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.227 23:18:04 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.227 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.227 23:18:04 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:18.227 WARNING: No tests are enabled so not running JSON configuration tests 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:18.227 23:18:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:18.227 00:05:18.227 real 0m0.128s 00:05:18.227 user 0m0.082s 00:05:18.227 sys 0m0.049s 00:05:18.227 23:18:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.227 23:18:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.227 ************************************ 00:05:18.227 END TEST json_config 00:05:18.227 ************************************ 00:05:18.227 23:18:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.227 23:18:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.227 23:18:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.227 23:18:04 -- common/autotest_common.sh@10 -- # set +x 00:05:18.227 ************************************ 00:05:18.227 START TEST json_config_extra_key 00:05:18.227 ************************************ 00:05:18.227 23:18:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.227 23:18:04 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.227 23:18:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.227 23:18:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.489 23:18:04 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.489 23:18:04 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.489 23:18:04 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.489 23:18:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.489 --rc genhtml_branch_coverage=1 00:05:18.489 --rc genhtml_function_coverage=1 00:05:18.489 --rc genhtml_legend=1 00:05:18.489 --rc geninfo_all_blocks=1 00:05:18.489 --rc geninfo_unexecuted_blocks=1 00:05:18.489 00:05:18.489 ' 00:05:18.489 23:18:04 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.489 --rc genhtml_branch_coverage=1 00:05:18.489 --rc genhtml_function_coverage=1 00:05:18.489 --rc genhtml_legend=1 00:05:18.489 --rc geninfo_all_blocks=1 00:05:18.489 --rc geninfo_unexecuted_blocks=1 00:05:18.489 00:05:18.489 ' 00:05:18.489 23:18:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.489 --rc genhtml_branch_coverage=1 00:05:18.489 --rc genhtml_function_coverage=1 00:05:18.489 --rc genhtml_legend=1 00:05:18.489 --rc geninfo_all_blocks=1 00:05:18.489 --rc geninfo_unexecuted_blocks=1 00:05:18.489 00:05:18.489 ' 00:05:18.489 23:18:04 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.489 --rc genhtml_branch_coverage=1 00:05:18.489 --rc 
genhtml_function_coverage=1 00:05:18.489 --rc genhtml_legend=1 00:05:18.489 --rc geninfo_all_blocks=1 00:05:18.489 --rc geninfo_unexecuted_blocks=1 00:05:18.489 00:05:18.489 ' 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b93f5f7c-f437-4b52-82ca-ba312a95313a 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b93f5f7c-f437-4b52-82ca-ba312a95313a 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.489 23:18:04 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.489 23:18:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.489 23:18:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.489 23:18:04 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.489 23:18:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.489 23:18:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.489 23:18:04 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.489 INFO: launching applications... 00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
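The trace above shows how json_config/common.sh keeps per-app state: one bash associative array per attribute (pid, RPC socket, launch parameters, config path), keyed by app name ('target' here), plus an ERR trap for loud failures. A condensed, hand-written sketch of that pattern follows; the array names, values, and trap are taken from the trace, while the launcher function itself is illustrative rather than the test's verbatim source:

    #!/usr/bin/env bash
    # Per-app bookkeeping, as echoed by json_config/common.sh in the trace above.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR   # as traced; on_error_exit lives in common.sh

    # Illustrative launcher (not the verbatim helper): start spdk_tgt with the
    # app's parameters and remember its pid under the app's key.
    start_app_sketch() {
        local app=$1
        # app_params is deliberately unquoted so "-m 0x1 -s 1024" word-splits.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }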
00:05:18.489 23:18:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.489 23:18:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70036 00:05:18.490 Waiting for target to run... 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70036 /var/tmp/spdk_tgt.sock 00:05:18.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.490 23:18:04 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70036 ']' 00:05:18.490 23:18:04 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.490 23:18:04 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.490 23:18:04 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.490 23:18:04 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.490 23:18:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.490 23:18:04 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.490 [2024-11-19 23:18:04.548240] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:18.490 [2024-11-19 23:18:04.548397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70036 ] 00:05:18.750 [2024-11-19 23:18:04.924384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.750 [2024-11-19 23:18:04.938467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.321 23:18:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.321 23:18:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:19.321 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.321 INFO: shutting down applications... 00:05:19.321 23:18:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
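The launch just traced pairs spdk_tgt's start with waitforlisten: the helper blocks (max_retries=100) until the new process is alive and its RPC socket accepts a request. A minimal sketch of that polling loop, assuming rpc.py's -t/-s flags as they appear later in this log and using spdk_get_version (listed by rpc_get_methods below) as the probe:

    # Sketch: block until $pid is listening on $rpc_addr, or give up.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                    spdk_get_version >/dev/null 2>&1; then
                return 0                              # socket is up and answering
            fi
            sleep 0.5
        done
        return 1                                      # never came up
    }

The shutdown sequence traced next inverts this loop: send SIGINT, then poll kill -0 in half-second steps (up to 30 tries) until the pid disappears.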
00:05:19.321 23:18:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70036 ]] 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70036 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70036 00:05:19.321 23:18:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.891 23:18:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.891 23:18:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.891 23:18:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70036 00:05:19.891 23:18:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.891 23:18:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:19.891 SPDK target shutdown done 00:05:19.891 Success 00:05:19.891 23:18:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.891 23:18:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.891 23:18:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:19.891 00:05:19.891 real 0m1.576s 00:05:19.891 user 0m1.188s 00:05:19.891 sys 0m0.423s 00:05:19.891 23:18:05 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.891 23:18:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.891 ************************************ 00:05:19.891 END TEST json_config_extra_key 00:05:19.891 ************************************ 00:05:19.891 23:18:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.891 23:18:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.891 23:18:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.891 23:18:05 -- common/autotest_common.sh@10 -- # set +x 00:05:19.891 ************************************ 00:05:19.891 START TEST alias_rpc 00:05:19.891 ************************************ 00:05:19.891 23:18:05 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:19.891 * Looking for test storage... 
00:05:19.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:19.891 23:18:06 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.891 23:18:06 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.891 23:18:06 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.152 23:18:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.152 --rc genhtml_branch_coverage=1 00:05:20.152 --rc genhtml_function_coverage=1 00:05:20.152 --rc genhtml_legend=1 00:05:20.152 --rc geninfo_all_blocks=1 00:05:20.152 --rc geninfo_unexecuted_blocks=1 00:05:20.152 00:05:20.152 ' 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.152 --rc genhtml_branch_coverage=1 00:05:20.152 --rc genhtml_function_coverage=1 00:05:20.152 --rc genhtml_legend=1 00:05:20.152 --rc geninfo_all_blocks=1 00:05:20.152 --rc geninfo_unexecuted_blocks=1 00:05:20.152 00:05:20.152 ' 00:05:20.152 23:18:06 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.152 --rc genhtml_branch_coverage=1 00:05:20.152 --rc genhtml_function_coverage=1 00:05:20.152 --rc genhtml_legend=1 00:05:20.152 --rc geninfo_all_blocks=1 00:05:20.152 --rc geninfo_unexecuted_blocks=1 00:05:20.152 00:05:20.152 ' 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.152 --rc genhtml_branch_coverage=1 00:05:20.152 --rc genhtml_function_coverage=1 00:05:20.152 --rc genhtml_legend=1 00:05:20.152 --rc geninfo_all_blocks=1 00:05:20.152 --rc geninfo_unexecuted_blocks=1 00:05:20.152 00:05:20.152 ' 00:05:20.152 23:18:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.152 23:18:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70109 00:05:20.152 23:18:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70109 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70109 ']' 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.152 23:18:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.153 23:18:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.153 23:18:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.153 23:18:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.153 23:18:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.153 [2024-11-19 23:18:06.211477] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
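Before each test the suite probes the installed lcov; the trace that keeps repeating above is cmp_versions answering "lt 1.15 2", i.e. whether this lcov predates 2.x. It splits both version strings on '.', '-' and ':' and compares them component by component. A sketch condensed to the '<' case exercised here (the real helper also handles other operators and sanitizes non-numeric components, which this sketch assumes away):

    lt() { cmp_versions_lt "$1" "$2"; }
    # Return 0 (true) when $1 is strictly older than $2, component-wise.
    cmp_versions_lt() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not strictly less-than
    }

So "lt 1.15 2" compares 1 against 2 in the first component and returns 0, which is why the coverage-friendly LCOV_OPTS are exported each time.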
00:05:20.153 [2024-11-19 23:18:06.211652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70109 ] 00:05:20.413 [2024-11-19 23:18:06.379372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.413 [2024-11-19 23:18:06.409043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.989 23:18:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.989 23:18:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:20.989 23:18:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:21.250 23:18:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70109 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70109 ']' 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70109 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70109 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.250 killing process with pid 70109 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70109' 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 70109 00:05:21.250 23:18:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 70109 00:05:21.511 00:05:21.511 real 0m1.639s 00:05:21.511 user 0m1.736s 00:05:21.511 sys 0m0.446s 00:05:21.511 ************************************ 00:05:21.511 END TEST alias_rpc 00:05:21.511 ************************************ 00:05:21.511 23:18:07 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.511 23:18:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.511 23:18:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.511 23:18:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.511 23:18:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.511 23:18:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.511 23:18:07 -- common/autotest_common.sh@10 -- # set +x 00:05:21.511 ************************************ 00:05:21.511 START TEST spdkcli_tcp 00:05:21.511 ************************************ 00:05:21.511 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.773 * Looking for test storage... 
00:05:21.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.773 23:18:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.773 --rc genhtml_branch_coverage=1 00:05:21.773 --rc genhtml_function_coverage=1 00:05:21.773 --rc genhtml_legend=1 00:05:21.773 --rc geninfo_all_blocks=1 00:05:21.773 --rc geninfo_unexecuted_blocks=1 00:05:21.773 00:05:21.773 ' 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.773 --rc genhtml_branch_coverage=1 00:05:21.773 --rc genhtml_function_coverage=1 00:05:21.773 --rc genhtml_legend=1 00:05:21.773 --rc geninfo_all_blocks=1 00:05:21.773 --rc geninfo_unexecuted_blocks=1 00:05:21.773 
00:05:21.773 ' 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.773 --rc genhtml_branch_coverage=1 00:05:21.773 --rc genhtml_function_coverage=1 00:05:21.773 --rc genhtml_legend=1 00:05:21.773 --rc geninfo_all_blocks=1 00:05:21.773 --rc geninfo_unexecuted_blocks=1 00:05:21.773 00:05:21.773 ' 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.773 --rc genhtml_branch_coverage=1 00:05:21.773 --rc genhtml_function_coverage=1 00:05:21.773 --rc genhtml_legend=1 00:05:21.773 --rc geninfo_all_blocks=1 00:05:21.773 --rc geninfo_unexecuted_blocks=1 00:05:21.773 00:05:21.773 ' 00:05:21.773 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:21.773 23:18:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:21.773 23:18:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:21.773 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.773 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.773 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.773 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.773 23:18:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.774 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70194 00:05:21.774 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70194 00:05:21.774 23:18:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.774 23:18:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70194 ']' 00:05:21.774 23:18:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.774 23:18:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.774 23:18:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.774 23:18:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.774 23:18:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.774 [2024-11-19 23:18:07.901842] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
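The spdkcli_tcp test starting here talks to spdk_tgt over TCP even though the target only listens on a UNIX socket: the trace below shows socat bridging 127.0.0.1:9998 to /var/tmp/spdk.sock while rpc.py connects with -s/-p (plus -r 100 retries and a 2s timeout). A condensed sketch of that arrangement; the explicit kill at the end is an assumption, since in the test itself the err_cleanup trap handles teardown:

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Issue an RPC over TCP through the bridge (flags as traced below).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    # Single-shot listener: it usually exits with the connection, so ignore errors.
    kill "$socat_pid" 2>/dev/null || true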
00:05:21.774 [2024-11-19 23:18:07.902008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70194 ] 00:05:22.035 [2024-11-19 23:18:08.066307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.035 [2024-11-19 23:18:08.097329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.035 [2024-11-19 23:18:08.097391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.618 23:18:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.618 23:18:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:22.618 23:18:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70211 00:05:22.618 23:18:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.618 23:18:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.881 [ 00:05:22.881 "bdev_malloc_delete", 00:05:22.881 "bdev_malloc_create", 00:05:22.881 "bdev_null_resize", 00:05:22.881 "bdev_null_delete", 00:05:22.881 "bdev_null_create", 00:05:22.881 "bdev_nvme_cuse_unregister", 00:05:22.881 "bdev_nvme_cuse_register", 00:05:22.881 "bdev_opal_new_user", 00:05:22.881 "bdev_opal_set_lock_state", 00:05:22.881 "bdev_opal_delete", 00:05:22.881 "bdev_opal_get_info", 00:05:22.881 "bdev_opal_create", 00:05:22.881 "bdev_nvme_opal_revert", 00:05:22.881 "bdev_nvme_opal_init", 00:05:22.881 "bdev_nvme_send_cmd", 00:05:22.881 "bdev_nvme_set_keys", 00:05:22.881 "bdev_nvme_get_path_iostat", 00:05:22.881 "bdev_nvme_get_mdns_discovery_info", 00:05:22.881 "bdev_nvme_stop_mdns_discovery", 00:05:22.881 "bdev_nvme_start_mdns_discovery", 00:05:22.881 "bdev_nvme_set_multipath_policy", 00:05:22.881 "bdev_nvme_set_preferred_path", 00:05:22.881 "bdev_nvme_get_io_paths", 00:05:22.881 "bdev_nvme_remove_error_injection", 00:05:22.881 "bdev_nvme_add_error_injection", 00:05:22.881 "bdev_nvme_get_discovery_info", 00:05:22.881 "bdev_nvme_stop_discovery", 00:05:22.881 "bdev_nvme_start_discovery", 00:05:22.881 "bdev_nvme_get_controller_health_info", 00:05:22.881 "bdev_nvme_disable_controller", 00:05:22.881 "bdev_nvme_enable_controller", 00:05:22.881 "bdev_nvme_reset_controller", 00:05:22.881 "bdev_nvme_get_transport_statistics", 00:05:22.881 "bdev_nvme_apply_firmware", 00:05:22.881 "bdev_nvme_detach_controller", 00:05:22.881 "bdev_nvme_get_controllers", 00:05:22.881 "bdev_nvme_attach_controller", 00:05:22.881 "bdev_nvme_set_hotplug", 00:05:22.881 "bdev_nvme_set_options", 00:05:22.881 "bdev_passthru_delete", 00:05:22.881 "bdev_passthru_create", 00:05:22.881 "bdev_lvol_set_parent_bdev", 00:05:22.881 "bdev_lvol_set_parent", 00:05:22.881 "bdev_lvol_check_shallow_copy", 00:05:22.881 "bdev_lvol_start_shallow_copy", 00:05:22.881 "bdev_lvol_grow_lvstore", 00:05:22.881 "bdev_lvol_get_lvols", 00:05:22.881 "bdev_lvol_get_lvstores", 00:05:22.881 "bdev_lvol_delete", 00:05:22.881 "bdev_lvol_set_read_only", 00:05:22.881 "bdev_lvol_resize", 00:05:22.881 "bdev_lvol_decouple_parent", 00:05:22.881 "bdev_lvol_inflate", 00:05:22.881 "bdev_lvol_rename", 00:05:22.881 "bdev_lvol_clone_bdev", 00:05:22.881 "bdev_lvol_clone", 00:05:22.881 "bdev_lvol_snapshot", 00:05:22.881 "bdev_lvol_create", 00:05:22.881 "bdev_lvol_delete_lvstore", 00:05:22.881 "bdev_lvol_rename_lvstore", 00:05:22.881 
"bdev_lvol_create_lvstore", 00:05:22.881 "bdev_raid_set_options", 00:05:22.881 "bdev_raid_remove_base_bdev", 00:05:22.881 "bdev_raid_add_base_bdev", 00:05:22.881 "bdev_raid_delete", 00:05:22.881 "bdev_raid_create", 00:05:22.881 "bdev_raid_get_bdevs", 00:05:22.881 "bdev_error_inject_error", 00:05:22.881 "bdev_error_delete", 00:05:22.881 "bdev_error_create", 00:05:22.881 "bdev_split_delete", 00:05:22.881 "bdev_split_create", 00:05:22.881 "bdev_delay_delete", 00:05:22.881 "bdev_delay_create", 00:05:22.881 "bdev_delay_update_latency", 00:05:22.881 "bdev_zone_block_delete", 00:05:22.881 "bdev_zone_block_create", 00:05:22.881 "blobfs_create", 00:05:22.881 "blobfs_detect", 00:05:22.881 "blobfs_set_cache_size", 00:05:22.881 "bdev_xnvme_delete", 00:05:22.881 "bdev_xnvme_create", 00:05:22.881 "bdev_aio_delete", 00:05:22.881 "bdev_aio_rescan", 00:05:22.881 "bdev_aio_create", 00:05:22.881 "bdev_ftl_set_property", 00:05:22.881 "bdev_ftl_get_properties", 00:05:22.881 "bdev_ftl_get_stats", 00:05:22.881 "bdev_ftl_unmap", 00:05:22.881 "bdev_ftl_unload", 00:05:22.881 "bdev_ftl_delete", 00:05:22.881 "bdev_ftl_load", 00:05:22.881 "bdev_ftl_create", 00:05:22.881 "bdev_virtio_attach_controller", 00:05:22.881 "bdev_virtio_scsi_get_devices", 00:05:22.881 "bdev_virtio_detach_controller", 00:05:22.881 "bdev_virtio_blk_set_hotplug", 00:05:22.881 "bdev_iscsi_delete", 00:05:22.881 "bdev_iscsi_create", 00:05:22.881 "bdev_iscsi_set_options", 00:05:22.881 "accel_error_inject_error", 00:05:22.881 "ioat_scan_accel_module", 00:05:22.881 "dsa_scan_accel_module", 00:05:22.881 "iaa_scan_accel_module", 00:05:22.881 "keyring_file_remove_key", 00:05:22.881 "keyring_file_add_key", 00:05:22.881 "keyring_linux_set_options", 00:05:22.881 "fsdev_aio_delete", 00:05:22.881 "fsdev_aio_create", 00:05:22.881 "iscsi_get_histogram", 00:05:22.881 "iscsi_enable_histogram", 00:05:22.881 "iscsi_set_options", 00:05:22.881 "iscsi_get_auth_groups", 00:05:22.881 "iscsi_auth_group_remove_secret", 00:05:22.881 "iscsi_auth_group_add_secret", 00:05:22.881 "iscsi_delete_auth_group", 00:05:22.881 "iscsi_create_auth_group", 00:05:22.881 "iscsi_set_discovery_auth", 00:05:22.881 "iscsi_get_options", 00:05:22.881 "iscsi_target_node_request_logout", 00:05:22.881 "iscsi_target_node_set_redirect", 00:05:22.881 "iscsi_target_node_set_auth", 00:05:22.881 "iscsi_target_node_add_lun", 00:05:22.881 "iscsi_get_stats", 00:05:22.882 "iscsi_get_connections", 00:05:22.882 "iscsi_portal_group_set_auth", 00:05:22.882 "iscsi_start_portal_group", 00:05:22.882 "iscsi_delete_portal_group", 00:05:22.882 "iscsi_create_portal_group", 00:05:22.882 "iscsi_get_portal_groups", 00:05:22.882 "iscsi_delete_target_node", 00:05:22.882 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.882 "iscsi_target_node_add_pg_ig_maps", 00:05:22.882 "iscsi_create_target_node", 00:05:22.882 "iscsi_get_target_nodes", 00:05:22.882 "iscsi_delete_initiator_group", 00:05:22.882 "iscsi_initiator_group_remove_initiators", 00:05:22.882 "iscsi_initiator_group_add_initiators", 00:05:22.882 "iscsi_create_initiator_group", 00:05:22.882 "iscsi_get_initiator_groups", 00:05:22.882 "nvmf_set_crdt", 00:05:22.882 "nvmf_set_config", 00:05:22.882 "nvmf_set_max_subsystems", 00:05:22.882 "nvmf_stop_mdns_prr", 00:05:22.882 "nvmf_publish_mdns_prr", 00:05:22.882 "nvmf_subsystem_get_listeners", 00:05:22.882 "nvmf_subsystem_get_qpairs", 00:05:22.882 "nvmf_subsystem_get_controllers", 00:05:22.882 "nvmf_get_stats", 00:05:22.882 "nvmf_get_transports", 00:05:22.882 "nvmf_create_transport", 00:05:22.882 "nvmf_get_targets", 00:05:22.882 
"nvmf_delete_target", 00:05:22.882 "nvmf_create_target", 00:05:22.882 "nvmf_subsystem_allow_any_host", 00:05:22.882 "nvmf_subsystem_set_keys", 00:05:22.882 "nvmf_subsystem_remove_host", 00:05:22.882 "nvmf_subsystem_add_host", 00:05:22.882 "nvmf_ns_remove_host", 00:05:22.882 "nvmf_ns_add_host", 00:05:22.882 "nvmf_subsystem_remove_ns", 00:05:22.882 "nvmf_subsystem_set_ns_ana_group", 00:05:22.882 "nvmf_subsystem_add_ns", 00:05:22.882 "nvmf_subsystem_listener_set_ana_state", 00:05:22.882 "nvmf_discovery_get_referrals", 00:05:22.882 "nvmf_discovery_remove_referral", 00:05:22.882 "nvmf_discovery_add_referral", 00:05:22.882 "nvmf_subsystem_remove_listener", 00:05:22.882 "nvmf_subsystem_add_listener", 00:05:22.882 "nvmf_delete_subsystem", 00:05:22.882 "nvmf_create_subsystem", 00:05:22.882 "nvmf_get_subsystems", 00:05:22.882 "env_dpdk_get_mem_stats", 00:05:22.882 "nbd_get_disks", 00:05:22.882 "nbd_stop_disk", 00:05:22.882 "nbd_start_disk", 00:05:22.882 "ublk_recover_disk", 00:05:22.882 "ublk_get_disks", 00:05:22.882 "ublk_stop_disk", 00:05:22.882 "ublk_start_disk", 00:05:22.882 "ublk_destroy_target", 00:05:22.882 "ublk_create_target", 00:05:22.882 "virtio_blk_create_transport", 00:05:22.882 "virtio_blk_get_transports", 00:05:22.882 "vhost_controller_set_coalescing", 00:05:22.882 "vhost_get_controllers", 00:05:22.882 "vhost_delete_controller", 00:05:22.882 "vhost_create_blk_controller", 00:05:22.882 "vhost_scsi_controller_remove_target", 00:05:22.882 "vhost_scsi_controller_add_target", 00:05:22.882 "vhost_start_scsi_controller", 00:05:22.882 "vhost_create_scsi_controller", 00:05:22.882 "thread_set_cpumask", 00:05:22.882 "scheduler_set_options", 00:05:22.882 "framework_get_governor", 00:05:22.882 "framework_get_scheduler", 00:05:22.882 "framework_set_scheduler", 00:05:22.882 "framework_get_reactors", 00:05:22.882 "thread_get_io_channels", 00:05:22.882 "thread_get_pollers", 00:05:22.882 "thread_get_stats", 00:05:22.882 "framework_monitor_context_switch", 00:05:22.882 "spdk_kill_instance", 00:05:22.882 "log_enable_timestamps", 00:05:22.882 "log_get_flags", 00:05:22.882 "log_clear_flag", 00:05:22.882 "log_set_flag", 00:05:22.882 "log_get_level", 00:05:22.882 "log_set_level", 00:05:22.882 "log_get_print_level", 00:05:22.882 "log_set_print_level", 00:05:22.882 "framework_enable_cpumask_locks", 00:05:22.882 "framework_disable_cpumask_locks", 00:05:22.882 "framework_wait_init", 00:05:22.882 "framework_start_init", 00:05:22.882 "scsi_get_devices", 00:05:22.882 "bdev_get_histogram", 00:05:22.882 "bdev_enable_histogram", 00:05:22.882 "bdev_set_qos_limit", 00:05:22.882 "bdev_set_qd_sampling_period", 00:05:22.882 "bdev_get_bdevs", 00:05:22.882 "bdev_reset_iostat", 00:05:22.882 "bdev_get_iostat", 00:05:22.882 "bdev_examine", 00:05:22.882 "bdev_wait_for_examine", 00:05:22.882 "bdev_set_options", 00:05:22.882 "accel_get_stats", 00:05:22.882 "accel_set_options", 00:05:22.882 "accel_set_driver", 00:05:22.882 "accel_crypto_key_destroy", 00:05:22.882 "accel_crypto_keys_get", 00:05:22.882 "accel_crypto_key_create", 00:05:22.882 "accel_assign_opc", 00:05:22.882 "accel_get_module_info", 00:05:22.882 "accel_get_opc_assignments", 00:05:22.882 "vmd_rescan", 00:05:22.882 "vmd_remove_device", 00:05:22.882 "vmd_enable", 00:05:22.882 "sock_get_default_impl", 00:05:22.882 "sock_set_default_impl", 00:05:22.882 "sock_impl_set_options", 00:05:22.882 "sock_impl_get_options", 00:05:22.882 "iobuf_get_stats", 00:05:22.882 "iobuf_set_options", 00:05:22.882 "keyring_get_keys", 00:05:22.882 "framework_get_pci_devices", 00:05:22.882 
"framework_get_config", 00:05:22.882 "framework_get_subsystems", 00:05:22.882 "fsdev_set_opts", 00:05:22.882 "fsdev_get_opts", 00:05:22.882 "trace_get_info", 00:05:22.882 "trace_get_tpoint_group_mask", 00:05:22.882 "trace_disable_tpoint_group", 00:05:22.882 "trace_enable_tpoint_group", 00:05:22.882 "trace_clear_tpoint_mask", 00:05:22.882 "trace_set_tpoint_mask", 00:05:22.882 "notify_get_notifications", 00:05:22.882 "notify_get_types", 00:05:22.882 "spdk_get_version", 00:05:22.882 "rpc_get_methods" 00:05:22.882 ] 00:05:22.882 23:18:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.882 23:18:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.882 23:18:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.882 23:18:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.882 23:18:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70194 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70194 ']' 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70194 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70194 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70194' 00:05:22.882 killing process with pid 70194 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70194 00:05:22.882 23:18:09 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70194 00:05:23.161 00:05:23.161 real 0m1.669s 00:05:23.161 user 0m2.878s 00:05:23.161 sys 0m0.506s 00:05:23.161 ************************************ 00:05:23.161 END TEST spdkcli_tcp 00:05:23.161 ************************************ 00:05:23.161 23:18:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.161 23:18:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.426 23:18:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.426 23:18:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.426 23:18:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.426 23:18:09 -- common/autotest_common.sh@10 -- # set +x 00:05:23.426 ************************************ 00:05:23.426 START TEST dpdk_mem_utility 00:05:23.426 ************************************ 00:05:23.426 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.426 * Looking for test storage... 
00:05:23.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:23.426 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.426 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.426 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.426 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.426 23:18:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.427 23:18:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.427 --rc genhtml_branch_coverage=1 00:05:23.427 --rc genhtml_function_coverage=1 00:05:23.427 --rc genhtml_legend=1 00:05:23.427 --rc geninfo_all_blocks=1 00:05:23.427 --rc geninfo_unexecuted_blocks=1 00:05:23.427 00:05:23.427 ' 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.427 --rc 
genhtml_branch_coverage=1 00:05:23.427 --rc genhtml_function_coverage=1 00:05:23.427 --rc genhtml_legend=1 00:05:23.427 --rc geninfo_all_blocks=1 00:05:23.427 --rc geninfo_unexecuted_blocks=1 00:05:23.427 00:05:23.427 ' 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.427 --rc genhtml_branch_coverage=1 00:05:23.427 --rc genhtml_function_coverage=1 00:05:23.427 --rc genhtml_legend=1 00:05:23.427 --rc geninfo_all_blocks=1 00:05:23.427 --rc geninfo_unexecuted_blocks=1 00:05:23.427 00:05:23.427 ' 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.427 --rc genhtml_branch_coverage=1 00:05:23.427 --rc genhtml_function_coverage=1 00:05:23.427 --rc genhtml_legend=1 00:05:23.427 --rc geninfo_all_blocks=1 00:05:23.427 --rc geninfo_unexecuted_blocks=1 00:05:23.427 00:05:23.427 ' 00:05:23.427 23:18:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.427 23:18:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70289 00:05:23.427 23:18:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70289 00:05:23.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70289 ']' 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.427 23:18:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.427 23:18:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.689 [2024-11-19 23:18:09.632127] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
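The dpdk_mem_utility test being set up here drives three commands against the running target, all visible in the trace that follows: an env_dpdk_get_mem_stats RPC that makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, then scripts/dpdk_mem_info.py to summarize heaps, mempools and memzones, then the same script with -m 0 for per-element detail on heap 0. As a plain command sequence (paths as they appear in this log):

    # Ask the target to dump its DPDK memory state (writes /tmp/spdk_mem_dump.txt).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize the dump: heaps, mempools, memzones.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # Per-element detail for heap id 0 (busy and free malloc elements).
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0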
00:05:23.689 [2024-11-19 23:18:09.632287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70289 ] 00:05:23.689 [2024-11-19 23:18:09.792524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.689 [2024-11-19 23:18:09.821987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.634 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.634 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:24.634 23:18:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.634 23:18:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.634 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.634 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.634 { 00:05:24.634 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.634 } 00:05:24.634 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.634 23:18:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.634 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:24.634 1 heaps totaling size 810.000000 MiB 00:05:24.634 size: 810.000000 MiB heap id: 0 00:05:24.634 end heaps---------- 00:05:24.634 9 mempools totaling size 595.772034 MiB 00:05:24.634 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.634 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.634 size: 92.545471 MiB name: bdev_io_70289 00:05:24.634 size: 50.003479 MiB name: msgpool_70289 00:05:24.634 size: 36.509338 MiB name: fsdev_io_70289 00:05:24.634 size: 21.763794 MiB name: PDU_Pool 00:05:24.634 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.634 size: 4.133484 MiB name: evtpool_70289 00:05:24.634 size: 0.026123 MiB name: Session_Pool 00:05:24.634 end mempools------- 00:05:24.634 6 memzones totaling size 4.142822 MiB 00:05:24.634 size: 1.000366 MiB name: RG_ring_0_70289 00:05:24.634 size: 1.000366 MiB name: RG_ring_1_70289 00:05:24.634 size: 1.000366 MiB name: RG_ring_4_70289 00:05:24.634 size: 1.000366 MiB name: RG_ring_5_70289 00:05:24.634 size: 0.125366 MiB name: RG_ring_2_70289 00:05:24.634 size: 0.015991 MiB name: RG_ring_3_70289 00:05:24.634 end memzones------- 00:05:24.634 23:18:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:24.634 heap id: 0 total size: 810.000000 MiB number of busy elements: 310 number of free elements: 15 00:05:24.634 list of free elements. 
size: 10.813782 MiB 00:05:24.635 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:24.635 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:24.635 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:24.635 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:24.635 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:24.635 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:24.635 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:24.635 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:24.635 element at address: 0x20001a600000 with size: 0.568237 MiB 00:05:24.635 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:24.635 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:24.635 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:24.635 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:24.635 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:24.635 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:24.635 list of standard malloc elements. size: 199.267334 MiB 00:05:24.635 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:24.635 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:24.635 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:24.635 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:24.635 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:24.635 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:24.635 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:24.635 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:24.635 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:24.635 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:24.635 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:24.635 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:24.635 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:24.635 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:24.636 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6931c0 with size: 0.000183 MiB 
00:05:24.636 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:24.636 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:24.637 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:24.637 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:24.637 element at 
address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e880 
with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:24.637 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:24.637 list of memzone associated elements. 
size: 599.918884 MiB 00:05:24.637 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:24.637 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.637 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:24.637 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.637 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:24.637 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70289_0 00:05:24.637 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:24.637 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70289_0 00:05:24.637 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:24.637 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70289_0 00:05:24.637 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:24.637 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.637 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:24.637 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.637 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:24.637 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70289_0 00:05:24.637 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:24.637 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70289 00:05:24.638 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:24.638 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70289 00:05:24.638 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:24.638 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.638 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:24.638 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.638 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:24.638 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.638 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:24.638 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.638 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:24.638 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70289 00:05:24.638 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:24.638 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70289 00:05:24.638 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:24.638 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70289 00:05:24.638 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:24.638 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70289 00:05:24.638 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:24.638 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70289 00:05:24.638 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:24.638 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70289 00:05:24.638 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:24.638 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.638 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:24.638 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:24.638 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:24.638 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.638 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:24.638 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70289 00:05:24.638 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:24.638 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70289 00:05:24.638 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:24.638 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.638 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:24.638 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.638 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:24.638 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70289 00:05:24.638 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:24.638 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.638 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:24.638 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70289 00:05:24.638 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:24.638 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70289 00:05:24.638 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:24.638 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70289 00:05:24.638 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:24.638 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.638 23:18:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.638 23:18:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70289 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70289 ']' 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70289 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70289 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.638 killing process with pid 70289 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70289' 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70289 00:05:24.638 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70289 00:05:24.899 00:05:24.899 real 0m1.530s 00:05:24.899 user 0m1.506s 00:05:24.899 sys 0m0.461s 00:05:24.899 ************************************ 00:05:24.899 END TEST dpdk_mem_utility 00:05:24.899 ************************************ 00:05:24.899 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.899 23:18:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.899 23:18:10 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:24.899 23:18:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.899 23:18:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.899 23:18:10 -- common/autotest_common.sh@10 -- # set +x 
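The dpdk_mem_utility run above boils down to a short RPC round trip: start spdk_tgt, call env_dpdk_get_mem_stats (which reports its dump file, /tmp/spdk_mem_dump.txt, as shown in the trace), then post-process the dump with scripts/dpdk_mem_info.py — once for the overall heap/mempool/memzone summary and once with -m 0 for heap 0. Below is a minimal standalone sketch of that flow; the SPDK checkout path and the use of rpc.py in place of the harness's rpc_cmd/waitforlisten helpers are assumptions, not the test script itself.

#!/usr/bin/env bash
# Minimal sketch of the dpdk_mem_utility flow traced above, assuming an SPDK
# checkout at $SPDK_DIR and a target listening on the default /var/tmp/spdk.sock.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # path is an assumption

# 1. Start spdk_tgt on one core, as the test does.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
tgt_pid=$!
trap 'kill $tgt_pid' EXIT

# 2. Poll the RPC socket until the target answers (stand-in for waitforlisten).
"$SPDK_DIR/scripts/rpc.py" -t 30 rpc_get_methods >/dev/null

# 3. Ask DPDK to dump its allocator state; the dump lands in
#    /tmp/spdk_mem_dump.txt per the log above.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# 4. Summarize all heaps/mempools/memzones, then drill into heap 0 (-m 0),
#    mirroring the two dpdk_mem_info.py invocations in the trace.
"$SPDK_DIR/scripts/dpdk_mem_info.py"
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0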
00:05:24.899 ************************************ 00:05:24.899 START TEST event 00:05:24.899 ************************************ 00:05:24.899 23:18:10 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:24.899 * Looking for test storage... 00:05:24.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:24.899 23:18:11 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.899 23:18:11 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.899 23:18:11 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.160 23:18:11 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.160 23:18:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.160 23:18:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.160 23:18:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.160 23:18:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.160 23:18:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.160 23:18:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.160 23:18:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.160 23:18:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.160 23:18:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.160 23:18:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.160 23:18:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.160 23:18:11 event -- scripts/common.sh@344 -- # case "$op" in 00:05:25.160 23:18:11 event -- scripts/common.sh@345 -- # : 1 00:05:25.160 23:18:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.160 23:18:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.160 23:18:11 event -- scripts/common.sh@365 -- # decimal 1 00:05:25.160 23:18:11 event -- scripts/common.sh@353 -- # local d=1 00:05:25.160 23:18:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.160 23:18:11 event -- scripts/common.sh@355 -- # echo 1 00:05:25.160 23:18:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.160 23:18:11 event -- scripts/common.sh@366 -- # decimal 2 00:05:25.160 23:18:11 event -- scripts/common.sh@353 -- # local d=2 00:05:25.160 23:18:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.160 23:18:11 event -- scripts/common.sh@355 -- # echo 2 00:05:25.160 23:18:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.160 23:18:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.160 23:18:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.160 23:18:11 event -- scripts/common.sh@368 -- # return 0 00:05:25.160 23:18:11 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.160 23:18:11 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.160 --rc genhtml_branch_coverage=1 00:05:25.160 --rc genhtml_function_coverage=1 00:05:25.160 --rc genhtml_legend=1 00:05:25.160 --rc geninfo_all_blocks=1 00:05:25.160 --rc geninfo_unexecuted_blocks=1 00:05:25.160 00:05:25.160 ' 00:05:25.160 23:18:11 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.160 --rc genhtml_branch_coverage=1 00:05:25.160 --rc genhtml_function_coverage=1 00:05:25.161 --rc genhtml_legend=1 00:05:25.161 --rc 
geninfo_all_blocks=1 00:05:25.161 --rc geninfo_unexecuted_blocks=1 00:05:25.161 00:05:25.161 ' 00:05:25.161 23:18:11 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.161 --rc genhtml_branch_coverage=1 00:05:25.161 --rc genhtml_function_coverage=1 00:05:25.161 --rc genhtml_legend=1 00:05:25.161 --rc geninfo_all_blocks=1 00:05:25.161 --rc geninfo_unexecuted_blocks=1 00:05:25.161 00:05:25.161 ' 00:05:25.161 23:18:11 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.161 --rc genhtml_branch_coverage=1 00:05:25.161 --rc genhtml_function_coverage=1 00:05:25.161 --rc genhtml_legend=1 00:05:25.161 --rc geninfo_all_blocks=1 00:05:25.161 --rc geninfo_unexecuted_blocks=1 00:05:25.161 00:05:25.161 ' 00:05:25.161 23:18:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:25.161 23:18:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.161 23:18:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.161 23:18:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:25.161 23:18:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.161 23:18:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.161 ************************************ 00:05:25.161 START TEST event_perf 00:05:25.161 ************************************ 00:05:25.161 23:18:11 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.161 Running I/O for 1 seconds...[2024-11-19 23:18:11.192681] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:25.161 [2024-11-19 23:18:11.192892] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70369 ] 00:05:25.422 [2024-11-19 23:18:11.369337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.422 [2024-11-19 23:18:11.402376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.422 [2024-11-19 23:18:11.402696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.422 [2024-11-19 23:18:11.403058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.422 Running I/O for 1 seconds...[2024-11-19 23:18:11.403098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.367 00:05:26.367 lcore 0: 139631 00:05:26.367 lcore 1: 139629 00:05:26.367 lcore 2: 139628 00:05:26.367 lcore 3: 139630 00:05:26.367 done. 
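The START/END banners and the real/user/sys lines that bracket each subtest in this log come from the harness's run_test wrapper in autotest_common.sh, invoked as run_test <name> <command...>. The sketch below is an illustrative reconstruction of that wrapper, not the upstream implementation; SPDK's actual banner text and bookkeeping differ.

# Hedged sketch of a run_test-style wrapper: print banners, time the command.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

# Usage mirroring the trace (path and flags as shown in the log):
# run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1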
00:05:26.367 00:05:26.367 real 0m1.308s 00:05:26.367 user 0m4.078s 00:05:26.367 sys 0m0.108s 00:05:26.367 23:18:12 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.367 ************************************ 00:05:26.367 END TEST event_perf 00:05:26.367 ************************************ 00:05:26.367 23:18:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.367 23:18:12 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.367 23:18:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:26.367 23:18:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.367 23:18:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.367 ************************************ 00:05:26.367 START TEST event_reactor 00:05:26.367 ************************************ 00:05:26.367 23:18:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:26.626 [2024-11-19 23:18:12.566631] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:26.626 [2024-11-19 23:18:12.566874] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70409 ] 00:05:26.626 [2024-11-19 23:18:12.727435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.626 [2024-11-19 23:18:12.747085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.999 test_start 00:05:27.999 oneshot 00:05:27.999 tick 100 00:05:27.999 tick 100 00:05:27.999 tick 250 00:05:27.999 tick 100 00:05:27.999 tick 100 00:05:27.999 tick 100 00:05:27.999 tick 250 00:05:27.999 tick 500 00:05:27.999 tick 100 00:05:27.999 tick 100 00:05:27.999 tick 250 00:05:27.999 tick 100 00:05:27.999 tick 100 00:05:27.999 test_end 00:05:27.999 00:05:27.999 real 0m1.254s 00:05:27.999 user 0m1.076s 00:05:27.999 sys 0m0.069s 00:05:28.000 23:18:13 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.000 ************************************ 00:05:28.000 23:18:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:28.000 END TEST event_reactor 00:05:28.000 ************************************ 00:05:28.000 23:18:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.000 23:18:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:28.000 23:18:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.000 23:18:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.000 ************************************ 00:05:28.000 START TEST event_reactor_perf 00:05:28.000 ************************************ 00:05:28.000 23:18:13 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:28.000 [2024-11-19 23:18:13.877798] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:05:28.000 [2024-11-19 23:18:13.877904] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70440 ] 00:05:28.000 [2024-11-19 23:18:14.035047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.000 [2024-11-19 23:18:14.053528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.936 test_start 00:05:28.936 test_end 00:05:28.936 Performance: 318176 events per second 00:05:28.936 00:05:28.936 real 0m1.242s 00:05:28.936 user 0m1.075s 00:05:28.936 sys 0m0.059s 00:05:28.936 23:18:15 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.936 ************************************ 00:05:28.936 END TEST event_reactor_perf 00:05:28.936 23:18:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.936 ************************************ 00:05:29.193 23:18:15 event -- event/event.sh@49 -- # uname -s 00:05:29.193 23:18:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:29.193 23:18:15 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.193 23:18:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.193 23:18:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.193 23:18:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.193 ************************************ 00:05:29.193 START TEST event_scheduler 00:05:29.193 ************************************ 00:05:29.193 23:18:15 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:29.193 * Looking for test storage... 
00:05:29.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:29.193 23:18:15 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.193 23:18:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.193 23:18:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.193 23:18:15 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.193 23:18:15 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.193 23:18:15 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.193 23:18:15 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.193 23:18:15 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.193 23:18:15 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.193 23:18:15 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.193 23:18:15 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.194 23:18:15 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.194 --rc genhtml_branch_coverage=1 00:05:29.194 --rc genhtml_function_coverage=1 00:05:29.194 --rc genhtml_legend=1 00:05:29.194 --rc geninfo_all_blocks=1 00:05:29.194 --rc geninfo_unexecuted_blocks=1 00:05:29.194 00:05:29.194 ' 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.194 --rc genhtml_branch_coverage=1 00:05:29.194 --rc genhtml_function_coverage=1 00:05:29.194 --rc genhtml_legend=1 00:05:29.194 --rc geninfo_all_blocks=1 00:05:29.194 --rc geninfo_unexecuted_blocks=1 00:05:29.194 00:05:29.194 ' 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.194 --rc genhtml_branch_coverage=1 00:05:29.194 --rc genhtml_function_coverage=1 00:05:29.194 --rc genhtml_legend=1 00:05:29.194 --rc geninfo_all_blocks=1 00:05:29.194 --rc geninfo_unexecuted_blocks=1 00:05:29.194 00:05:29.194 ' 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.194 --rc genhtml_branch_coverage=1 00:05:29.194 --rc genhtml_function_coverage=1 00:05:29.194 --rc genhtml_legend=1 00:05:29.194 --rc geninfo_all_blocks=1 00:05:29.194 --rc geninfo_unexecuted_blocks=1 00:05:29.194 00:05:29.194 ' 00:05:29.194 23:18:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:29.194 23:18:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70510 00:05:29.194 23:18:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.194 23:18:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70510 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70510 ']' 00:05:29.194 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.194 23:18:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.194 23:18:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:29.194 [2024-11-19 23:18:15.370949] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:29.194 [2024-11-19 23:18:15.371066] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70510 ] 00:05:29.452 [2024-11-19 23:18:15.529483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.452 [2024-11-19 23:18:15.551417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.452 [2024-11-19 23:18:15.551698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.452 [2024-11-19 23:18:15.551903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.452 [2024-11-19 23:18:15.551978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:30.386 23:18:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.386 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.386 POWER: Cannot set governor of lcore 0 to performance 00:05:30.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.386 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.386 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:30.386 POWER: Cannot set governor of lcore 0 to userspace 00:05:30.386 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:30.386 POWER: Unable to set Power Management Environment for lcore 0 00:05:30.386 [2024-11-19 23:18:16.213252] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:30.386 [2024-11-19 23:18:16.213271] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:30.386 [2024-11-19 23:18:16.213280] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:30.386 [2024-11-19 23:18:16.213305] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:30.386 
[2024-11-19 23:18:16.213325] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:30.386 [2024-11-19 23:18:16.213344] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.386 23:18:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.386 [2024-11-19 23:18:16.269393] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.386 23:18:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.386 23:18:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.386 ************************************ 00:05:30.386 START TEST scheduler_create_thread 00:05:30.386 ************************************ 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.386 2 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.386 3 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.386 4 00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.387 23:18:16 
00:05:30.386 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 5
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 6
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 7
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 8
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 9
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 10
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:30.387 23:18:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:31.758 23:18:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:31.758 23:18:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:31.758 23:18:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:31.758 23:18:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.758 23:18:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:33.135 23:18:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:33.135
00:05:33.135 real 0m2.609s
00:05:33.135 user 0m0.015s
00:05:33.135 sys 0m0.006s
00:05:33.135 23:18:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.135 ************************************
00:05:33.135 END TEST scheduler_create_thread
00:05:33.135 ************************************
00:05:33.135 23:18:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:33.135 23:18:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:33.135 23:18:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70510
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70510 ']'
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70510
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70510
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:33.135 killing process with pid 70510
23:18:18 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70510'
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70510
00:05:33.135 23:18:18 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70510
00:05:33.394 [2024-11-19 23:18:19.372596] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:33.394
00:05:33.394 real 0m4.349s
00:05:33.394 user 0m8.020s
00:05:33.394 sys 0m0.322s
00:05:33.394 ************************************
00:05:33.394 END TEST event_scheduler
00:05:33.394 ************************************
00:05:33.394 23:18:19 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.394 23:18:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:33.394 23:18:19 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:33.394 23:18:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:33.394 23:18:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:33.394 23:18:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.394 23:18:19 event -- common/autotest_common.sh@10 -- # set +x
00:05:33.394 ************************************
00:05:33.394 START TEST app_repeat
00:05:33.394 ************************************
00:05:33.394 23:18:19 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:33.394 23:18:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:33.394 23:18:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:33.394 23:18:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:33.394 23:18:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:33.394 23:18:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:33.394 23:18:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:33.394 23:18:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:33.652 Process app_repeat pid: 70611
00:05:33.652 23:18:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70611
00:05:33.652 23:18:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:33.652 23:18:19 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:33.652 23:18:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70611'
spdk_app_start Round 0
00:05:33.652 23:18:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:33.652 23:18:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:33.652 23:18:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70611 /var/tmp/spdk-nbd.sock
00:05:33.652 23:18:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70611 ']'
00:05:33.652 23:18:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:33.652 23:18:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:33.652 23:18:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:33.652 23:18:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:33.652 23:18:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:33.652 [2024-11-19 23:18:19.605034] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
00:05:33.652 [2024-11-19 23:18:19.605140] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70611 ]
00:05:33.653 [2024-11-19 23:18:19.761698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:33.653 [2024-11-19 23:18:19.782165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:33.653 [2024-11-19 23:18:19.782201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.587 23:18:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:34.587 23:18:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:34.587 23:18:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:34.587 Malloc0
00:05:34.587 23:18:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:34.935 Malloc1
00:05:34.935 23:18:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:34.935 23:18:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:35.224 /dev/nbd0
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:35.224 1+0 records in
00:05:35.224 1+0 records out
00:05:35.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272698 s, 15.0 MB/s
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:35.224 /dev/nbd1
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:35.224 1+0 records in
00:05:35.224 1+0 records out
00:05:35.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424935 s, 9.6 MB/s
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:35.224 23:18:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.224 23:18:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:35.484 {
00:05:35.484 "nbd_device": "/dev/nbd0",
00:05:35.484 "bdev_name": "Malloc0"
00:05:35.484 },
00:05:35.484 {
00:05:35.484 "nbd_device": "/dev/nbd1",
00:05:35.484 "bdev_name": "Malloc1"
00:05:35.484 }
00:05:35.484 ]'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:35.484 {
00:05:35.484 "nbd_device": "/dev/nbd0",
00:05:35.484 "bdev_name": "Malloc0"
00:05:35.484 },
00:05:35.484 {
00:05:35.484 "nbd_device": "/dev/nbd1",
00:05:35.484 "bdev_name": "Malloc1"
00:05:35.484 }
00:05:35.484 ]'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:35.484 /dev/nbd1'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:35.484 /dev/nbd1'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:35.484 256+0 records in
00:05:35.484 256+0 records out
00:05:35.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00638928 s, 164 MB/s
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:35.484 256+0 records in
00:05:35.484 256+0 records out
00:05:35.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195494 s, 53.6 MB/s
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:35.484 256+0 records in
00:05:35.484 256+0 records out
00:05:35.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162061 s, 64.7 MB/s
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:35.484 23:18:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:35.742 23:18:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:35.742 23:18:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.742 23:18:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:35.743 23:18:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.003 23:18:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:36.263 23:18:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:36.263 23:18:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:36.522 23:18:22 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:36.522 [2024-11-19 23:18:22.687207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:36.522 [2024-11-19 23:18:22.708753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:36.522 [2024-11-19 23:18:22.708759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.781 [2024-11-19 23:18:22.743827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:36.781 [2024-11-19 23:18:22.744049] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:40.086 spdk_app_start Round 1
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:40.086 23:18:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:40.086 23:18:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:40.086 23:18:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70611 /var/tmp/spdk-nbd.sock
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70611 ']'
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.086 23:18:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:40.086 23:18:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:40.086 Malloc0
00:05:40.086 23:18:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:40.086 Malloc1
00:05:40.086 23:18:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:40.086 23:18:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:40.346 /dev/nbd0
00:05:40.346 23:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:40.346 23:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:40.346 1+0 records in
00:05:40.346 1+0 records out
00:05:40.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152995 s, 26.8 MB/s
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:40.346 23:18:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:40.347 23:18:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:40.347 23:18:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:40.347 23:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:40.347 23:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:40.347 23:18:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:40.608 /dev/nbd1
00:05:40.608 23:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:40.608 23:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:40.608 1+0 records in
00:05:40.608 1+0 records out
00:05:40.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149306 s, 27.4 MB/s
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:40.608 23:18:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:40.608 23:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:40.608 23:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:40.608 23:18:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:40.608 23:18:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.608 23:18:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:40.871 {
00:05:40.871 "nbd_device": "/dev/nbd0",
00:05:40.871 "bdev_name": "Malloc0"
00:05:40.871 },
00:05:40.871 {
00:05:40.871 "nbd_device": "/dev/nbd1",
00:05:40.871 "bdev_name": "Malloc1"
00:05:40.871 }
00:05:40.871 ]'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:40.871 {
00:05:40.871 "nbd_device": "/dev/nbd0",
00:05:40.871 "bdev_name": "Malloc0"
00:05:40.871 },
00:05:40.871 {
00:05:40.871 "nbd_device": "/dev/nbd1",
00:05:40.871 "bdev_name": "Malloc1"
00:05:40.871 }
00:05:40.871 ]'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:40.871 /dev/nbd1'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:40.871 /dev/nbd1'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:40.871 256+0 records in
00:05:40.871 256+0 records out
00:05:40.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00974099 s, 108 MB/s
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:40.871 256+0 records in
00:05:40.871 256+0 records out
00:05:40.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133344 s, 78.6 MB/s
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:40.871 256+0 records in
00:05:40.871 256+0 records out
00:05:40.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149693 s, 70.0 MB/s
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:40.871 23:18:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:41.133 23:18:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:41.395 23:18:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:41.656 23:18:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:41.656 23:18:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:41.656 23:18:27 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:41.921 [2024-11-19 23:18:27.879605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:41.921 [2024-11-19 23:18:27.895833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.921 [2024-11-19 23:18:27.896027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.921 [2024-11-19 23:18:27.925255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:41.921 [2024-11-19 23:18:27.925301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:45.228 23:18:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:45.228 spdk_app_start Round 2
00:05:45.228 23:18:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:45.228 23:18:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70611 /var/tmp/spdk-nbd.sock
00:05:45.228 23:18:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70611 ']'
00:05:45.228 23:18:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:45.228 23:18:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:45.228 23:18:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:45.228 23:18:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:45.228 23:18:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:45.228 23:18:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:45.228 23:18:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:45.228 23:18:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:45.228 Malloc0
00:05:45.228 23:18:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:45.228 Malloc1
00:05:45.228 23:18:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:45.228 23:18:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:45.490 /dev/nbd0
00:05:45.490 23:18:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:45.490 23:18:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:45.490 1+0 records in
00:05:45.490 1+0 records out
00:05:45.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185827 s, 22.0 MB/s
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:45.490 23:18:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:45.490 23:18:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:45.490 23:18:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:45.490 23:18:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:45.750 /dev/nbd1
00:05:45.750 23:18:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:45.750 23:18:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:45.750 1+0 records in
00:05:45.750 1+0 records out
00:05:45.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164241 s, 24.9 MB/s
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:45.750 23:18:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:45.751 23:18:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:45.751 23:18:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:45.751 23:18:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:45.751 23:18:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:45.751 23:18:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:45.751 23:18:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:45.751 23:18:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:45.751 23:18:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:46.012 {
00:05:46.012 "nbd_device": "/dev/nbd0",
00:05:46.012 "bdev_name": "Malloc0"
00:05:46.012 },
00:05:46.012 {
00:05:46.012 "nbd_device": "/dev/nbd1",
00:05:46.012 "bdev_name": "Malloc1"
00:05:46.012 }
00:05:46.012 ]'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:46.012 {
00:05:46.012 "nbd_device": "/dev/nbd0",
00:05:46.012 "bdev_name": "Malloc0"
00:05:46.012 },
00:05:46.012 {
00:05:46.012 "nbd_device": "/dev/nbd1",
00:05:46.012 "bdev_name": "Malloc1"
00:05:46.012 }
00:05:46.012 ]'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:46.012 /dev/nbd1'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:46.012 /dev/nbd1'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:46.012 256+0 records in
00:05:46.012 256+0 records out
00:05:46.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010062 s, 104 MB/s
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:46.012 256+0 records in
00:05:46.012 256+0 records out
00:05:46.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133999 s, 78.3 MB/s
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:46.012 256+0 records in
00:05:46.012 256+0 records out
00:05:46.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181644 s, 57.7 MB/s
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
-- # for i in "${nbd_list[@]}" 00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.012 23:18:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.013 23:18:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.013 23:18:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.013 23:18:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.013 23:18:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.013 23:18:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.013 23:18:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.013 23:18:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.274 23:18:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.535 23:18:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.796 23:18:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.796 23:18:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.072 23:18:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.072 [2024-11-19 23:18:33.067511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.072 [2024-11-19 23:18:33.082676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.072 [2024-11-19 23:18:33.082679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.072 [2024-11-19 23:18:33.111567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.072 [2024-11-19 23:18:33.111613] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.373 23:18:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70611 /var/tmp/spdk-nbd.sock 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70611 ']' 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
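The trace above is the complete nbd data-verify cycle: seed a scratch file from /dev/urandom, copy it onto each exported device, compare byte-for-byte, then stop the disks and poll until the kernel drops them. A minimal standalone sketch of that pattern follows, assuming /dev/nbd0 and /dev/nbd1 are already exported over /var/tmp/spdk-nbd.sock; the scratch path and the 0.1 s poll interval are illustrative, not taken from the suite.

    #!/usr/bin/env bash
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)

    # Write 1 MiB of random data (256 blocks of 4 KiB) to a scratch file.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    # Copy the pattern onto every device; oflag=direct bypasses the page
    # cache so the data reaches the backing bdev before the compare.
    for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Read each device back and compare the first 1 MiB byte-for-byte.
    for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # After nbd_stop_disk, the device's entry disappears from
    # /proc/partitions; poll for that with a bounded retry budget.
    waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        # -w matches the whole word, so nbd1 does not match nbd10.
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
      done
      echo "$nbd_name still attached" >&2
      return 1
    }

    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    waitfornbd_exit nbd0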
00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.373 23:18:36 event.app_repeat -- event/event.sh@39 -- # killprocess 70611 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70611 ']' 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70611 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70611 00:05:50.373 killing process with pid 70611 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70611' 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70611 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70611 00:05:50.373 spdk_app_start is called in Round 0. 00:05:50.373 Shutdown signal received, stop current app iteration 00:05:50.373 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 reinitialization... 00:05:50.373 spdk_app_start is called in Round 1. 00:05:50.373 Shutdown signal received, stop current app iteration 00:05:50.373 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 reinitialization... 00:05:50.373 spdk_app_start is called in Round 2. 00:05:50.373 Shutdown signal received, stop current app iteration 00:05:50.373 Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 reinitialization... 00:05:50.373 spdk_app_start is called in Round 3. 00:05:50.373 Shutdown signal received, stop current app iteration 00:05:50.373 ************************************ 00:05:50.373 END TEST app_repeat 00:05:50.373 ************************************ 00:05:50.373 23:18:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.373 23:18:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.373 00:05:50.373 real 0m16.764s 00:05:50.373 user 0m37.524s 00:05:50.373 sys 0m2.018s 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.373 23:18:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.373 23:18:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.373 23:18:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:50.373 23:18:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.373 23:18:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.373 23:18:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.373 ************************************ 00:05:50.373 START TEST cpu_locks 00:05:50.373 ************************************ 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:50.373 * Looking for test storage... 
00:05:50.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.373 23:18:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.373 --rc genhtml_branch_coverage=1 00:05:50.373 --rc genhtml_function_coverage=1 00:05:50.373 --rc genhtml_legend=1 00:05:50.373 --rc geninfo_all_blocks=1 00:05:50.373 --rc geninfo_unexecuted_blocks=1 00:05:50.373 00:05:50.373 ' 00:05:50.373 23:18:36 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.373 --rc genhtml_branch_coverage=1 00:05:50.373 --rc genhtml_function_coverage=1 
00:05:50.373 --rc genhtml_legend=1 00:05:50.373 --rc geninfo_all_blocks=1 00:05:50.373 --rc geninfo_unexecuted_blocks=1 00:05:50.374 00:05:50.374 ' 00:05:50.374 23:18:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.374 --rc genhtml_branch_coverage=1 00:05:50.374 --rc genhtml_function_coverage=1 00:05:50.374 --rc genhtml_legend=1 00:05:50.374 --rc geninfo_all_blocks=1 00:05:50.374 --rc geninfo_unexecuted_blocks=1 00:05:50.374 00:05:50.374 ' 00:05:50.374 23:18:36 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.374 --rc genhtml_branch_coverage=1 00:05:50.374 --rc genhtml_function_coverage=1 00:05:50.374 --rc genhtml_legend=1 00:05:50.374 --rc geninfo_all_blocks=1 00:05:50.374 --rc geninfo_unexecuted_blocks=1 00:05:50.374 00:05:50.374 ' 00:05:50.374 23:18:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:50.374 23:18:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:50.374 23:18:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:50.374 23:18:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:50.374 23:18:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.374 23:18:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.374 23:18:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.374 ************************************ 00:05:50.374 START TEST default_locks 00:05:50.374 ************************************ 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71025 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71025 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71025 ']' 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.374 23:18:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 [2024-11-19 23:18:36.606415] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
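The cmp_versions walk traced above (scripts/common.sh) is a plain component-wise compare: split both version strings on '.', '-' and ':', then compare numerically field by field, padding the shorter one with zeros. A hedged sketch, assuming purely numeric components as in the 1.15-vs-2 check here:

    version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
      for ((v = 0; v < n; v++)); do
        # Missing trailing fields compare as zero (1.15 vs 2 -> 1.15 2.0).
        local x=${a[v]:-0} y=${b[v]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
      done
      return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2"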
00:05:50.632 [2024-11-19 23:18:36.606640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71025 ] 00:05:50.632 [2024-11-19 23:18:36.761100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.632 [2024-11-19 23:18:36.777561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71025 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71025 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71025 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71025 ']' 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71025 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71025 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.565 killing process with pid 71025 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71025' 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71025 00:05:51.565 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71025 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71025 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71025 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71025 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71025 ']' 00:05:51.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
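locks_exist, used throughout the cpu_locks suite, boils down to asking lslocks which files a given pid has locked and grepping for the spdk_cpu_lock prefix: a target started with cpumask locks enabled holds a lock on /var/tmp/spdk_cpu_lock_NNN for each claimed core. A small sketch; spdk_tgt_pid stands in for whatever pid the caller tracks:

    locks_exist() {
      local pid=$1
      # lslocks -p lists only the locks held by that pid.
      lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" && echo "pid $spdk_tgt_pid holds its core lock"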
00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.823 ERROR: process (pid: 71025) is no longer running 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71025) - No such process 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.823 ************************************ 00:05:51.823 END TEST default_locks 00:05:51.823 ************************************ 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.823 00:05:51.823 real 0m1.250s 00:05:51.823 user 0m1.262s 00:05:51.823 sys 0m0.353s 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.823 23:18:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 23:18:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:51.823 23:18:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.823 23:18:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.823 23:18:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 ************************************ 00:05:51.823 START TEST default_locks_via_rpc 00:05:51.823 ************************************ 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71072 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71072 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71072 ']' 00:05:51.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
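The NOT helper driving the negative test above inverts a command's exit status, so the test passes only when the expected failure actually occurs. A simplified sketch (the real helper in autotest_common.sh additionally distinguishes signal deaths via statuses above 128):

    NOT() {
      if "$@"; then
        return 1   # the command was expected to fail but succeeded
      fi
      return 0     # expected failure observed
    }

    # e.g. a waitforlisten on an already-killed pid should not succeed:
    NOT waitforlisten "$stale_pid" && echo "stale pid correctly rejected"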
00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.823 23:18:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.823 [2024-11-19 23:18:37.908808] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:51.823 [2024-11-19 23:18:37.908926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71072 ] 00:05:52.081 [2024-11-19 23:18:38.064200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.081 [2024-11-19 23:18:38.080819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71072 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71072 00:05:52.657 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.915 23:18:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71072 00:05:52.915 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71072 ']' 
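framework_disable_cpumask_locks and framework_enable_cpumask_locks, exercised above via rpc_cmd, toggle the per-core lock files over the RPC socket without restarting the target. Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" framework_disable_cpumask_locks   # release the per-core lock files
    "$rpc" framework_enable_cpumask_locks    # take them again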
00:05:52.915 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71072 00:05:52.915 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.915 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.915 23:18:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71072 00:05:52.915 23:18:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.915 23:18:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.915 killing process with pid 71072 00:05:52.915 23:18:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71072' 00:05:52.915 23:18:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71072 00:05:52.915 23:18:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71072 00:05:53.172 00:05:53.172 real 0m1.384s 00:05:53.172 user 0m1.441s 00:05:53.172 sys 0m0.394s 00:05:53.172 23:18:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.172 23:18:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.172 ************************************ 00:05:53.172 END TEST default_locks_via_rpc 00:05:53.173 ************************************ 00:05:53.173 23:18:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:53.173 23:18:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.173 23:18:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.173 23:18:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.173 ************************************ 00:05:53.173 START TEST non_locking_app_on_locked_coremask 00:05:53.173 ************************************ 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71119 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71119 /var/tmp/spdk.sock 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71119 ']' 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
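killprocess, seen here and in every teardown below, follows one pattern: confirm the pid is alive with kill -0, log which command it is, signal it, and reap it with wait. A sketch, assuming the pid is a child of the calling shell (as it is for targets these tests launch):

    killprocess() {
      local pid=$1
      kill -0 "$pid"                      # liveness check: fails if pid is gone
      ps --no-headers -o comm= "$pid"     # show which process we are targeting
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                 # reap; SIGTERM makes wait non-zero
    }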
00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.173 23:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.173 [2024-11-19 23:18:39.352708] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:53.173 [2024-11-19 23:18:39.353120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71119 ] 00:05:53.430 [2024-11-19 23:18:39.509892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.430 [2024-11-19 23:18:39.528656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71135 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71135 /var/tmp/spdk2.sock 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71135 ']' 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:53.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.994 23:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.252 [2024-11-19 23:18:40.255750] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:54.252 [2024-11-19 23:18:40.256171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71135 ] 00:05:54.252 [2024-11-19 23:18:40.426558] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
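The setup above runs two targets on the same core by giving the second its own RPC socket (-r) and telling it to skip the core lock. A condensed sketch, with the readiness waits (waitforlisten) omitted:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 &                                        # locks core 0
    pid1=$!
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                # shares core 0, no lock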
00:05:54.252 [2024-11-19 23:18:40.426611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.512 [2024-11-19 23:18:40.465419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.077 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.077 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.077 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71119 00:05:55.077 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.077 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71119 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71119 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71119 ']' 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71119 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71119 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.335 killing process with pid 71119 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71119' 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71119 00:05:55.335 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71119 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71135 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71135 ']' 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71135 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71135 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.906 killing process with pid 71135 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71135' 00:05:55.906 23:18:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71135 00:05:55.906 23:18:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71135 00:05:56.164 00:05:56.164 real 0m2.827s 00:05:56.164 user 0m3.139s 00:05:56.164 sys 0m0.753s 00:05:56.164 23:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.164 23:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.164 ************************************ 00:05:56.164 END TEST non_locking_app_on_locked_coremask 00:05:56.164 ************************************ 00:05:56.164 23:18:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:56.164 23:18:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.164 23:18:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.164 23:18:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.164 ************************************ 00:05:56.164 START TEST locking_app_on_unlocked_coremask 00:05:56.164 ************************************ 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71193 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71193 /var/tmp/spdk.sock 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71193 ']' 00:05:56.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:56.164 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.165 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.165 23:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.165 [2024-11-19 23:18:42.228893] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:05:56.165 [2024-11-19 23:18:42.229003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71193 ] 00:05:56.423 [2024-11-19 23:18:42.378609] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
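The next test inverts that arrangement: the first target is started with --disable-cpumask-locks so the core-0 lock stays free, and the plain second instance then claims it. Continuing the sketch above, readiness waits again omitted:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 --disable-cpumask-locks &    # leaves the core-0 lock free
    pid1=$!
    "$tgt" -m 0x1 -r /var/tmp/spdk2.sock &     # second target takes the lock
    pid2=$!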
00:05:56.423 [2024-11-19 23:18:42.378643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.423 [2024-11-19 23:18:42.395088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71209 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71209 /var/tmp/spdk2.sock 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71209 ']' 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.989 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.989 [2024-11-19 23:18:43.087194] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:05:56.989 [2024-11-19 23:18:43.087303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71209 ] 00:05:57.247 [2024-11-19 23:18:43.250003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.247 [2024-11-19 23:18:43.282816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.813 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.813 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.813 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71209 00:05:57.813 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71209 00:05:57.813 23:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71193 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71193 ']' 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71193 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71193 00:05:58.070 killing process with pid 71193 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71193' 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71193 00:05:58.070 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71193 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71209 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71209 ']' 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71209 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71209 00:05:58.636 killing process with pid 71209 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.636 23:18:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71209' 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71209 00:05:58.636 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71209 00:05:58.895 ************************************ 00:05:58.895 END TEST locking_app_on_unlocked_coremask 00:05:58.895 ************************************ 00:05:58.895 00:05:58.895 real 0m2.681s 00:05:58.895 user 0m2.984s 00:05:58.895 sys 0m0.716s 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 23:18:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.895 23:18:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.895 23:18:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.895 23:18:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 ************************************ 00:05:58.895 START TEST locking_app_on_locked_coremask 00:05:58.895 ************************************ 00:05:58.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71260 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71260 /var/tmp/spdk.sock 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71260 ']' 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.895 23:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.895 [2024-11-19 23:18:44.969611] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:05:58.895 [2024-11-19 23:18:44.969867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71260 ] 00:05:59.153 [2024-11-19 23:18:45.128596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.153 [2024-11-19 23:18:45.147293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71272 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71272 /var/tmp/spdk2.sock 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71272 /var/tmp/spdk2.sock 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71272 /var/tmp/spdk2.sock 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71272 ']' 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.718 23:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.718 [2024-11-19 23:18:45.867867] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:05:59.719 [2024-11-19 23:18:45.868298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71272 ] 00:05:59.977 [2024-11-19 23:18:46.041598] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71260 has claimed it. 00:05:59.977 [2024-11-19 23:18:46.041647] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.542 ERROR: process (pid: 71272) is no longer running 00:06:00.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71272) - No such process 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71260 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71260 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71260 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71260 ']' 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71260 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71260 00:06:00.542 killing process with pid 71260 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71260' 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71260 00:06:00.542 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71260 00:06:00.800 00:06:00.800 real 0m2.031s 00:06:00.800 user 0m2.283s 00:06:00.800 sys 0m0.480s 00:06:00.800 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.800 ************************************ 00:06:00.800 END 
TEST locking_app_on_locked_coremask 00:06:00.800 ************************************ 00:06:00.800 23:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.800 23:18:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.800 23:18:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.800 23:18:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.800 23:18:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.800 ************************************ 00:06:00.800 START TEST locking_overlapped_coremask 00:06:00.800 ************************************ 00:06:00.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71318 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71318 /var/tmp/spdk.sock 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71318 ']' 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.800 23:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:01.058 [2024-11-19 23:18:47.040872] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
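The test that just finished exercised the refusal path: once one target holds the core-0 lock, a second instance on the same mask without --disable-cpumask-locks must abort with the claim error logged above. A sketch of that expectation; the sleep is a crude stand-in for waitforlisten:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 &                        # first target claims the core-0 lock
    pid1=$!
    sleep 1
    if "$tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second target unexpectedly started" >&2
      exit 1
    fi
    echo "double claim on core 0 correctly refused"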
00:06:01.058 [2024-11-19 23:18:47.040978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71318 ] 00:06:01.058 [2024-11-19 23:18:47.200589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.058 [2024-11-19 23:18:47.220531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.058 [2024-11-19 23:18:47.220574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.058 [2024-11-19 23:18:47.220652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71332 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71332 /var/tmp/spdk2.sock 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71332 /var/tmp/spdk2.sock 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71332 /var/tmp/spdk2.sock 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71332 ']' 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.993 23:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.993 [2024-11-19 23:18:47.946337] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
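Masks 0x7 (cores 0-2) and 0x1c (cores 2-4) overlap only on core 2, which is exactly where the second target's claim is about to fail. Sketching the same collision, again with a sleep standing in for a proper readiness wait:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x7 &                         # locks cores 0, 1 and 2
    pid1=$!
    sleep 1
    # 0x1c covers cores 2-4; core 2 is already locked, so this must fail.
    if "$tgt" -m 0x1c -r /var/tmp/spdk2.sock; then
      echo "overlapping mask unexpectedly started" >&2
      exit 1
    fi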
00:06:01.993 [2024-11-19 23:18:47.946624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71332 ] 00:06:01.993 [2024-11-19 23:18:48.116068] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71318 has claimed it. 00:06:01.993 [2024-11-19 23:18:48.116123] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.562 ERROR: process (pid: 71332) is no longer running 00:06:02.562 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71332) - No such process 00:06:02.562 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.562 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:02.562 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71318 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71318 ']' 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71318 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71318 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71318' 00:06:02.563 killing process with pid 71318 00:06:02.563 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71318 00:06:02.563 23:18:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71318 00:06:02.824 00:06:02.824 real 0m1.893s 00:06:02.824 user 0m5.251s 00:06:02.824 sys 0m0.375s 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.824 ************************************ 00:06:02.824 END TEST locking_overlapped_coremask 00:06:02.824 ************************************ 00:06:02.824 23:18:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.824 23:18:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.824 23:18:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.824 23:18:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.824 ************************************ 00:06:02.824 START TEST locking_overlapped_coremask_via_rpc 00:06:02.824 ************************************ 00:06:02.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71374 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71374 /var/tmp/spdk.sock 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71374 ']' 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.824 23:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.824 [2024-11-19 23:18:48.979327] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:02.824 [2024-11-19 23:18:48.979581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71374 ] 00:06:03.082 [2024-11-19 23:18:49.137167] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
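The locking_overlapped_coremask failure traced above hinges on SPDK's per-core lock files: the first target (-m 0x7, cores 0-2) claims /var/tmp/spdk_cpu_lock_000 through _002, so the second target (-m 0x1c, cores 2-4) aborts on the shared core 2 with "Cannot create lock on core 2". A minimal sketch of that scenario, assuming it is run from an SPDK repo root with spdk_tgt built under build/bin as in this trace:

    # First target claims cores 0-2; lock files appear under /var/tmp.
    build/bin/spdk_tgt -m 0x7 &
    sleep 1
    # Mask 0x1c overlaps on core 2 -> "Unable to acquire lock ... exiting".
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
    ls /var/tmp/spdk_cpu_lock_*    # still only _000 _001 _002, held by the survivor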
00:06:03.082 [2024-11-19 23:18:49.137313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.082 [2024-11-19 23:18:49.157471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.082 [2024-11-19 23:18:49.157716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.082 [2024-11-19 23:18:49.157719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71392 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71392 /var/tmp/spdk2.sock 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71392 ']' 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.659 23:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.920 [2024-11-19 23:18:49.872270] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:03.920 [2024-11-19 23:18:49.872387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71392 ] 00:06:03.920 [2024-11-19 23:18:50.047344] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.920 [2024-11-19 23:18:50.047400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.920 [2024-11-19 23:18:50.088241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.920 [2024-11-19 23:18:50.088252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.920 [2024-11-19 23:18:50.088301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:04.909 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.910 [2024-11-19 23:18:50.740926] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71374 has claimed it. 00:06:04.910 request: 00:06:04.910 { 00:06:04.910 "method": "framework_enable_cpumask_locks", 00:06:04.910 "req_id": 1 00:06:04.910 } 00:06:04.910 Got JSON-RPC error response 00:06:04.910 response: 00:06:04.910 { 00:06:04.910 "code": -32603, 00:06:04.910 "message": "Failed to claim CPU core: 2" 00:06:04.910 } 00:06:04.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
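In the via_rpc variant just above, both targets boot with --disable-cpumask-locks, so startup succeeds on both overlapping masks; the conflict only surfaces once framework_enable_cpumask_locks is invoked. A hedged sketch of the RPC sequence the test exercises, using the repo's scripts/rpc.py and assuming both targets are already listening:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # First claim wins cores 0-2.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second claim fails: JSON-RPC -32603 "Failed to claim CPU core: 2".
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks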
00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71374 /var/tmp/spdk.sock 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71374 ']' 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71392 /var/tmp/spdk2.sock 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71392 ']' 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
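The repeated "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the harness's waitforlisten helper, which polls the target until its RPC socket is usable. Roughly, as a sketch inferred from the trace and not the exact autotest_common.sh implementation:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died
            [ -S "$rpc_addr" ] && return 0            # socket exists, target is up
            sleep 0.1
        done
        return 1
    }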
00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.910 23:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.167 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.167 ************************************ 00:06:05.167 END TEST locking_overlapped_coremask_via_rpc 00:06:05.167 ************************************ 00:06:05.167 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.167 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.167 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.167 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.168 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.168 00:06:05.168 real 0m2.270s 00:06:05.168 user 0m1.056s 00:06:05.168 sys 0m0.137s 00:06:05.168 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.168 23:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.168 23:18:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.168 23:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71374 ]] 00:06:05.168 23:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71374 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71374 ']' 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71374 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71374 00:06:05.168 killing process with pid 71374 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71374' 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71374 00:06:05.168 23:18:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71374 00:06:05.426 23:18:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71392 ]] 00:06:05.426 23:18:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71392 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71392 ']' 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71392 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.426 
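The check_remaining_locks call traced above (cpu_locks.sh@36-38) verifies that exactly the lock files for cores 0-2 survive once the second target has been denied, globbing what exists and comparing it against the expected set for mask 0x7:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # what actually exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of mask 0x7
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }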
23:18:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71392 00:06:05.426 killing process with pid 71392 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71392' 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71392 00:06:05.426 23:18:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71392 00:06:05.695 23:18:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.695 23:18:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:05.695 23:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71374 ]] 00:06:05.695 23:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71374 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71374 ']' 00:06:05.695 Process with pid 71374 is not found 00:06:05.695 Process with pid 71392 is not found 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71374 00:06:05.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71374) - No such process 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71374 is not found' 00:06:05.695 23:18:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71392 ]] 00:06:05.695 23:18:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71392 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71392 ']' 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71392 00:06:05.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71392) - No such process 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71392 is not found' 00:06:05.695 23:18:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.695 00:06:05.695 real 0m15.330s 00:06:05.695 user 0m27.503s 00:06:05.695 sys 0m3.951s 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.695 23:18:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.695 ************************************ 00:06:05.695 END TEST cpu_locks 00:06:05.695 ************************************ 00:06:05.695 ************************************ 00:06:05.695 END TEST event 00:06:05.695 ************************************ 00:06:05.695 00:06:05.695 real 0m40.761s 00:06:05.695 user 1m19.454s 00:06:05.695 sys 0m6.762s 00:06:05.696 23:18:51 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.696 23:18:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.696 23:18:51 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.696 23:18:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.696 23:18:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.696 23:18:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.696 ************************************ 00:06:05.696 START TEST thread 00:06:05.696 ************************************ 00:06:05.696 23:18:51 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.696 * Looking for test storage... 
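Every suite in this log is launched through the harness's run_test wrapper, which produces the START TEST/END TEST star banners and the bash time summary (real/user/sys) seen after each suite. Schematically, as a sketch rather than the exact helper:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"        # emits the real/user/sys lines in the trace
        echo "************ END TEST $name ************"
    }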
00:06:05.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:05.696 23:18:51 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.696 23:18:51 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.696 23:18:51 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.955 23:18:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.955 23:18:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.955 23:18:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.955 23:18:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.955 23:18:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.955 23:18:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.955 23:18:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.955 23:18:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.955 23:18:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.955 23:18:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.955 23:18:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.955 23:18:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:05.955 23:18:51 thread -- scripts/common.sh@345 -- # : 1 00:06:05.955 23:18:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.955 23:18:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.955 23:18:51 thread -- scripts/common.sh@365 -- # decimal 1 00:06:05.955 23:18:51 thread -- scripts/common.sh@353 -- # local d=1 00:06:05.955 23:18:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.955 23:18:51 thread -- scripts/common.sh@355 -- # echo 1 00:06:05.955 23:18:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.955 23:18:51 thread -- scripts/common.sh@366 -- # decimal 2 00:06:05.955 23:18:51 thread -- scripts/common.sh@353 -- # local d=2 00:06:05.955 23:18:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.955 23:18:51 thread -- scripts/common.sh@355 -- # echo 2 00:06:05.955 23:18:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.955 23:18:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.955 23:18:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.955 23:18:51 thread -- scripts/common.sh@368 -- # return 0 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.955 --rc genhtml_branch_coverage=1 00:06:05.955 --rc genhtml_function_coverage=1 00:06:05.955 --rc genhtml_legend=1 00:06:05.955 --rc geninfo_all_blocks=1 00:06:05.955 --rc geninfo_unexecuted_blocks=1 00:06:05.955 00:06:05.955 ' 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.955 --rc genhtml_branch_coverage=1 00:06:05.955 --rc genhtml_function_coverage=1 00:06:05.955 --rc genhtml_legend=1 00:06:05.955 --rc geninfo_all_blocks=1 00:06:05.955 --rc geninfo_unexecuted_blocks=1 00:06:05.955 00:06:05.955 ' 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:05.955 --rc genhtml_branch_coverage=1 00:06:05.955 --rc genhtml_function_coverage=1 00:06:05.955 --rc genhtml_legend=1 00:06:05.955 --rc geninfo_all_blocks=1 00:06:05.955 --rc geninfo_unexecuted_blocks=1 00:06:05.955 00:06:05.955 ' 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.955 --rc genhtml_branch_coverage=1 00:06:05.955 --rc genhtml_function_coverage=1 00:06:05.955 --rc genhtml_legend=1 00:06:05.955 --rc geninfo_all_blocks=1 00:06:05.955 --rc geninfo_unexecuted_blocks=1 00:06:05.955 00:06:05.955 ' 00:06:05.955 23:18:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.955 23:18:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.955 ************************************ 00:06:05.955 START TEST thread_poller_perf 00:06:05.955 ************************************ 00:06:05.955 23:18:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.955 [2024-11-19 23:18:51.985065] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:05.955 [2024-11-19 23:18:51.985246] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71519 ] 00:06:05.955 [2024-11-19 23:18:52.132361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.213 [2024-11-19 23:18:52.151127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.213 Running 1000 pollers for 1 seconds with 1 microseconds period. 
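The poller_perf flags map one-to-one onto the announcement line above; the path below is relative to the repo root seen in the trace:

    # -b 1000 : register 1000 pollers
    # -l 1    : 1 microsecond poller period (-l 0 in the next run uses a 0 us period)
    # -t 1    : run the reactor for 1 second
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1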
00:06:07.156 [2024-11-19T23:18:53.348Z] ====================================== 00:06:07.156 [2024-11-19T23:18:53.348Z] busy:2611469364 (cyc) 00:06:07.156 [2024-11-19T23:18:53.348Z] total_run_count: 412000 00:06:07.156 [2024-11-19T23:18:53.348Z] tsc_hz: 2600000000 (cyc) 00:06:07.156 [2024-11-19T23:18:53.348Z] ====================================== 00:06:07.156 [2024-11-19T23:18:53.348Z] poller_cost: 6338 (cyc), 2437 (nsec) 00:06:07.156 00:06:07.156 ************************************ 00:06:07.156 END TEST thread_poller_perf 00:06:07.156 ************************************ 00:06:07.156 real 0m1.229s 00:06:07.156 user 0m1.071s 00:06:07.156 sys 0m0.053s 00:06:07.156 23:18:53 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.156 23:18:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.156 23:18:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.156 23:18:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:07.156 23:18:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.156 23:18:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.156 ************************************ 00:06:07.156 START TEST thread_poller_perf 00:06:07.156 ************************************ 00:06:07.156 23:18:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.156 [2024-11-19 23:18:53.254462] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:07.156 [2024-11-19 23:18:53.254567] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71561 ] 00:06:07.413 [2024-11-19 23:18:53.407015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.413 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:07.413 [2024-11-19 23:18:53.422872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.346 [2024-11-19T23:18:54.538Z] ====================================== 00:06:08.346 [2024-11-19T23:18:54.538Z] busy:2602441216 (cyc) 00:06:08.346 [2024-11-19T23:18:54.538Z] total_run_count: 5356000 00:06:08.346 [2024-11-19T23:18:54.538Z] tsc_hz: 2600000000 (cyc) 00:06:08.346 [2024-11-19T23:18:54.538Z] ====================================== 00:06:08.346 [2024-11-19T23:18:54.538Z] poller_cost: 485 (cyc), 186 (nsec) 00:06:08.346 00:06:08.346 real 0m1.231s 00:06:08.346 user 0m1.074s 00:06:08.346 sys 0m0.052s 00:06:08.346 ************************************ 00:06:08.346 END TEST thread_poller_perf 00:06:08.346 ************************************ 00:06:08.346 23:18:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.346 23:18:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.346 23:18:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.346 ************************************ 00:06:08.346 END TEST thread 00:06:08.346 ************************************ 00:06:08.346 00:06:08.346 real 0m2.694s 00:06:08.346 user 0m2.262s 00:06:08.346 sys 0m0.217s 00:06:08.346 23:18:54 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.346 23:18:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.346 23:18:54 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:08.346 23:18:54 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.346 23:18:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.346 23:18:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.346 23:18:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.346 ************************************ 00:06:08.346 START TEST app_cmdline 00:06:08.346 ************************************ 00:06:08.346 23:18:54 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.605 * Looking for test storage... 
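The poller_cost figures in both runs follow directly from the printed counters: cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure divides that by the TSC rate (2600000000 cyc/s, i.e. 2.6 cycles per nanosecond):

    poller_cost(cyc)  = busy / total_run_count
                      = 2611469364 / 412000  ~ 6338 cyc   (1 us period)
                      = 2602441216 / 5356000 ~  485 cyc   (0 us period)
    poller_cost(nsec) = poller_cost(cyc) / (tsc_hz / 1e9)
                      = 6338 / 2.6 ~ 2437 nsec ; 485 / 2.6 ~ 186 nsec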
00:06:08.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.605 23:18:54 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.605 --rc genhtml_branch_coverage=1 00:06:08.605 --rc genhtml_function_coverage=1 00:06:08.605 --rc genhtml_legend=1 00:06:08.605 --rc geninfo_all_blocks=1 00:06:08.605 --rc geninfo_unexecuted_blocks=1 00:06:08.605 00:06:08.605 ' 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.605 --rc genhtml_branch_coverage=1 00:06:08.605 --rc genhtml_function_coverage=1 00:06:08.605 --rc genhtml_legend=1 00:06:08.605 --rc geninfo_all_blocks=1 00:06:08.605 --rc geninfo_unexecuted_blocks=1 00:06:08.605 
00:06:08.605 ' 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.605 --rc genhtml_branch_coverage=1 00:06:08.605 --rc genhtml_function_coverage=1 00:06:08.605 --rc genhtml_legend=1 00:06:08.605 --rc geninfo_all_blocks=1 00:06:08.605 --rc geninfo_unexecuted_blocks=1 00:06:08.605 00:06:08.605 ' 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.605 --rc genhtml_branch_coverage=1 00:06:08.605 --rc genhtml_function_coverage=1 00:06:08.605 --rc genhtml_legend=1 00:06:08.605 --rc geninfo_all_blocks=1 00:06:08.605 --rc geninfo_unexecuted_blocks=1 00:06:08.605 00:06:08.605 ' 00:06:08.605 23:18:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:08.605 23:18:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71639 00:06:08.605 23:18:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71639 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71639 ']' 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.605 23:18:54 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:08.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.605 23:18:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:08.605 [2024-11-19 23:18:54.737903] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:06:08.605 [2024-11-19 23:18:54.738180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71639 ] 00:06:08.863 [2024-11-19 23:18:54.893108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.863 [2024-11-19 23:18:54.909718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.430 23:18:55 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.430 23:18:55 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:09.430 23:18:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:09.688 { 00:06:09.688 "version": "SPDK v25.01-pre git sha1 f22e807f1", 00:06:09.688 "fields": { 00:06:09.688 "major": 25, 00:06:09.688 "minor": 1, 00:06:09.688 "patch": 0, 00:06:09.688 "suffix": "-pre", 00:06:09.688 "commit": "f22e807f1" 00:06:09.688 } 00:06:09.688 } 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:09.688 23:18:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:09.688 23:18:55 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:09.947 request: 00:06:09.947 { 00:06:09.947 "method": "env_dpdk_get_mem_stats", 00:06:09.947 "req_id": 1 00:06:09.947 } 00:06:09.947 Got JSON-RPC error response 00:06:09.947 response: 00:06:09.947 { 00:06:09.947 "code": -32601, 00:06:09.947 "message": "Method not found" 00:06:09.947 } 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.947 23:18:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71639 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71639 ']' 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71639 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71639 00:06:09.947 killing process with pid 71639 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71639' 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@973 -- # kill 71639 00:06:09.947 23:18:55 app_cmdline -- common/autotest_common.sh@978 -- # wait 71639 00:06:10.206 ************************************ 00:06:10.206 END TEST app_cmdline 00:06:10.206 ************************************ 00:06:10.206 00:06:10.206 real 0m1.683s 00:06:10.206 user 0m2.007s 00:06:10.206 sys 0m0.371s 00:06:10.206 23:18:56 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.206 23:18:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.206 23:18:56 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.206 23:18:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.206 23:18:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.206 23:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.206 ************************************ 00:06:10.206 START TEST version 00:06:10.206 ************************************ 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:10.206 * Looking for test storage... 
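The "Method not found" above is the --rpcs-allowed allow-list doing its job: the target was started exposing only spdk_get_version and rpc_get_methods, so any other call is rejected with JSON-RPC -32601. In short, assuming the repo layout from the trace:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # ok: returns the version JSON above
    scripts/rpc.py rpc_get_methods           # ok: lists exactly those two methods
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: -32601 Method not found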
00:06:10.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.206 23:18:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.206 23:18:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.206 23:18:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.206 23:18:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.206 23:18:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.206 23:18:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.206 23:18:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.206 23:18:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.206 23:18:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.206 23:18:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.206 23:18:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.206 23:18:56 version -- scripts/common.sh@344 -- # case "$op" in 00:06:10.206 23:18:56 version -- scripts/common.sh@345 -- # : 1 00:06:10.206 23:18:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.206 23:18:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.206 23:18:56 version -- scripts/common.sh@365 -- # decimal 1 00:06:10.206 23:18:56 version -- scripts/common.sh@353 -- # local d=1 00:06:10.206 23:18:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.206 23:18:56 version -- scripts/common.sh@355 -- # echo 1 00:06:10.206 23:18:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.206 23:18:56 version -- scripts/common.sh@366 -- # decimal 2 00:06:10.206 23:18:56 version -- scripts/common.sh@353 -- # local d=2 00:06:10.206 23:18:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.206 23:18:56 version -- scripts/common.sh@355 -- # echo 2 00:06:10.206 23:18:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.206 23:18:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.206 23:18:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.206 23:18:56 version -- scripts/common.sh@368 -- # return 0 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.206 --rc genhtml_branch_coverage=1 00:06:10.206 --rc genhtml_function_coverage=1 00:06:10.206 --rc genhtml_legend=1 00:06:10.206 --rc geninfo_all_blocks=1 00:06:10.206 --rc geninfo_unexecuted_blocks=1 00:06:10.206 00:06:10.206 ' 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.206 --rc genhtml_branch_coverage=1 00:06:10.206 --rc genhtml_function_coverage=1 00:06:10.206 --rc genhtml_legend=1 00:06:10.206 --rc geninfo_all_blocks=1 00:06:10.206 --rc geninfo_unexecuted_blocks=1 00:06:10.206 00:06:10.206 ' 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.206 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:10.206 --rc genhtml_branch_coverage=1 00:06:10.206 --rc genhtml_function_coverage=1 00:06:10.206 --rc genhtml_legend=1 00:06:10.206 --rc geninfo_all_blocks=1 00:06:10.206 --rc geninfo_unexecuted_blocks=1 00:06:10.206 00:06:10.206 ' 00:06:10.206 23:18:56 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.206 --rc genhtml_branch_coverage=1 00:06:10.206 --rc genhtml_function_coverage=1 00:06:10.206 --rc genhtml_legend=1 00:06:10.206 --rc geninfo_all_blocks=1 00:06:10.206 --rc geninfo_unexecuted_blocks=1 00:06:10.206 00:06:10.206 ' 00:06:10.206 23:18:56 version -- app/version.sh@17 -- # get_header_version major 00:06:10.206 23:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:10.206 23:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.206 23:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.206 23:18:56 version -- app/version.sh@17 -- # major=25 00:06:10.206 23:18:56 version -- app/version.sh@18 -- # get_header_version minor 00:06:10.207 23:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.207 23:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:10.207 23:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.207 23:18:56 version -- app/version.sh@18 -- # minor=1 00:06:10.207 23:18:56 version -- app/version.sh@19 -- # get_header_version patch 00:06:10.207 23:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.207 23:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:10.207 23:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.207 23:18:56 version -- app/version.sh@19 -- # patch=0 00:06:10.465 23:18:56 version -- app/version.sh@20 -- # get_header_version suffix 00:06:10.465 23:18:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:10.465 23:18:56 version -- app/version.sh@14 -- # cut -f2 00:06:10.465 23:18:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:10.465 23:18:56 version -- app/version.sh@20 -- # suffix=-pre 00:06:10.465 23:18:56 version -- app/version.sh@22 -- # version=25.1 00:06:10.465 23:18:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:10.465 23:18:56 version -- app/version.sh@28 -- # version=25.1rc0 00:06:10.465 23:18:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:10.465 23:18:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:10.465 23:18:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:10.465 23:18:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:10.465 ************************************ 00:06:10.465 END TEST version 00:06:10.465 ************************************ 00:06:10.465 00:06:10.465 real 0m0.184s 00:06:10.465 user 0m0.128s 00:06:10.465 sys 0m0.084s 00:06:10.465 23:18:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.465 23:18:56 version -- common/autotest_common.sh@10 -- # set +x 00:06:10.465 23:18:56 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:10.465 23:18:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:10.465 23:18:56 -- spdk/autotest.sh@194 -- # uname -s 00:06:10.465 23:18:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:10.465 23:18:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.465 23:18:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:10.465 23:18:56 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:10.465 23:18:56 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:10.465 23:18:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.465 23:18:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.465 23:18:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.465 ************************************ 00:06:10.465 START TEST blockdev_nvme 00:06:10.465 ************************************ 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:10.465 * Looking for test storage... 00:06:10.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.465 23:18:56 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.465 --rc genhtml_branch_coverage=1 00:06:10.465 --rc genhtml_function_coverage=1 00:06:10.465 --rc genhtml_legend=1 00:06:10.465 --rc geninfo_all_blocks=1 00:06:10.465 --rc geninfo_unexecuted_blocks=1 00:06:10.465 00:06:10.465 ' 00:06:10.465 23:18:56 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.465 --rc genhtml_branch_coverage=1 00:06:10.465 --rc genhtml_function_coverage=1 00:06:10.465 --rc genhtml_legend=1 00:06:10.465 --rc geninfo_all_blocks=1 00:06:10.466 --rc geninfo_unexecuted_blocks=1 00:06:10.466 00:06:10.466 ' 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.466 --rc genhtml_branch_coverage=1 00:06:10.466 --rc genhtml_function_coverage=1 00:06:10.466 --rc genhtml_legend=1 00:06:10.466 --rc geninfo_all_blocks=1 00:06:10.466 --rc geninfo_unexecuted_blocks=1 00:06:10.466 00:06:10.466 ' 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.466 --rc genhtml_branch_coverage=1 00:06:10.466 --rc genhtml_function_coverage=1 00:06:10.466 --rc genhtml_legend=1 00:06:10.466 --rc geninfo_all_blocks=1 00:06:10.466 --rc geninfo_unexecuted_blocks=1 00:06:10.466 00:06:10.466 ' 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.466 23:18:56 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71800 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71800 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71800 ']' 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.466 23:18:56 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.466 23:18:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.724 [2024-11-19 23:18:56.688618] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
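(The trace above is blockdev.sh doing its setup: export the RPC defaults such as RPC_PIPE_TIMEOUT and the QOS bdev names, launch spdk_tgt as pid 71800, and block in waitforlisten until the target's UNIX-domain RPC socket answers. A minimal standalone sketch of that launch-and-wait pattern, assuming a local SPDK checkout in $SPDK_DIR; wait_for_rpc_socket is an illustrative helper name, not the harness's own:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Start the target in the background and make sure it dies with the shell.
    "$SPDK_DIR/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid" 2>/dev/null' EXIT

    # Poll the RPC socket; rpc_get_methods is a cheap query that succeeds
    # as soon as the app is up and listening, which is all waitforlisten needs.
    wait_for_rpc_socket() {
        local retries=100
        while (( retries-- > 0 )); do
            "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_socket || exit 1
)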
00:06:10.724 [2024-11-19 23:18:56.688902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71800 ] 00:06:10.724 [2024-11-19 23:18:56.844205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.724 [2024-11-19 23:18:56.860765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.290 23:18:57 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.290 23:18:57 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:11.290 23:18:57 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:11.290 23:18:57 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:11.290 23:18:57 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:11.290 23:18:57 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:11.290 23:18:57 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.548 23:18:57 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:11.548 23:18:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.548 23:18:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.806 23:18:57 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.806 23:18:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:11.806 23:18:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:11.807 23:18:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e5fbdfe0-65e3-4bb6-9ada-22eb41b7e941"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e5fbdfe0-65e3-4bb6-9ada-22eb41b7e941",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "71b5ec54-abb4-4110-ae1b-2f3d95a6c791"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "71b5ec54-abb4-4110-ae1b-2f3d95a6c791",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e4bb0a29-1940-4d92-8dba-d20a59b3ff8a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e4bb0a29-1940-4d92-8dba-d20a59b3ff8a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ae0fae7d-286b-4641-9eea-5932fa6f4672"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae0fae7d-286b-4641-9eea-5932fa6f4672",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "727b3f49-c73d-412e-95c1-26795e78ea3a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "727b3f49-c73d-412e-95c1-26795e78ea3a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a7d23569-fc74-43e7-b565-cbbab54305c8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a7d23569-fc74-43e7-b565-cbbab54305c8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:11.807 23:18:57 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:11.807 23:18:57 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:11.807 23:18:57 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:11.807 23:18:57 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71800 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71800 ']' 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71800 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:11.807 23:18:57 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71800 00:06:11.807 killing process with pid 71800 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71800' 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71800 00:06:11.807 23:18:57 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71800 00:06:12.065 23:18:58 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:12.065 23:18:58 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:12.065 23:18:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:12.065 23:18:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.065 23:18:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.065 ************************************ 00:06:12.065 START TEST bdev_hello_world 00:06:12.065 ************************************ 00:06:12.065 23:18:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:12.324 [2024-11-19 23:18:58.276638] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:12.324 [2024-11-19 23:18:58.276773] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71862 ] 00:06:12.324 [2024-11-19 23:18:58.431953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.324 [2024-11-19 23:18:58.448755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.886 [2024-11-19 23:18:58.804216] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:12.886 [2024-11-19 23:18:58.804255] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:12.886 [2024-11-19 23:18:58.804269] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:12.886 [2024-11-19 23:18:58.805881] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:12.886 [2024-11-19 23:18:58.806249] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:12.886 [2024-11-19 23:18:58.806268] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:12.886 [2024-11-19 23:18:58.806527] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
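(The hello_bdev notices above show the example's whole life cycle: open Nvme0n1, write a buffer through an io channel, read it back, and, just below, stop the app once "Hello World!" round-trips. Reproducing the run outside the harness should only take two commands; a sketch assuming a local SPDK build, with /tmp/bdev.json as an illustrative path:

    # Generate a bdev_nvme_attach_controller config for the local NVMe
    # devices, then point the hello_bdev example at the first namespace.
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems > /tmp/bdev.json
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1
)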
00:06:12.886 00:06:12.886 [2024-11-19 23:18:58.806543] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:12.886 00:06:12.886 real 0m0.708s 00:06:12.886 user 0m0.484s 00:06:12.886 sys 0m0.122s 00:06:12.886 ************************************ 00:06:12.886 END TEST bdev_hello_world 00:06:12.886 ************************************ 00:06:12.886 23:18:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.886 23:18:58 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:12.886 23:18:58 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:12.886 23:18:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.886 23:18:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.886 23:18:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.886 ************************************ 00:06:12.886 START TEST bdev_bounds 00:06:12.886 ************************************ 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:12.887 Process bdevio pid: 71893 00:06:12.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71893 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71893' 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71893 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71893 ']' 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.887 23:18:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:12.887 [2024-11-19 23:18:59.054040] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
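(bdev_bounds repeats the start-and-wait dance with bdevio: per the trace above it launches bdevio against the shared bdev.json, waits on /var/tmp/spdk.sock, and then, below, tests.py drives the per-bdev boundary suites over that socket. Condensed, the flow is roughly the following sketch, reusing wait_for_rpc_socket from the earlier note; flag meanings follow the harness's own usage, -s 0 being the PRE_RESERVED_MEM=0 seen above:

    # bdevio sits waiting (-w) until tests.py triggers perform_tests over RPC.
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_DIR/test/bdev/bdev.json" &
    bdevio_pid=$!
    wait_for_rpc_socket

    # Runs the "bdevio tests on: ..." suites shown below, one per bdev.
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests

    kill "$bdevio_pid"
)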
00:06:12.887 [2024-11-19 23:18:59.054286] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71893 ] 00:06:13.158 [2024-11-19 23:18:59.212798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.158 [2024-11-19 23:18:59.231272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.158 [2024-11-19 23:18:59.231503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.158 [2024-11-19 23:18:59.231584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.722 23:18:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.722 23:18:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:13.722 23:18:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:13.980 I/O targets: 00:06:13.981 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:13.981 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:13.981 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.981 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.981 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:13.981 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:13.981 00:06:13.981 00:06:13.981 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.981 http://cunit.sourceforge.net/ 00:06:13.981 00:06:13.981 00:06:13.981 Suite: bdevio tests on: Nvme3n1 00:06:13.981 Test: blockdev write read block ...passed 00:06:13.981 Test: blockdev write zeroes read block ...passed 00:06:13.981 Test: blockdev write zeroes read no split ...passed 00:06:13.981 Test: blockdev write zeroes read split ...passed 00:06:13.981 Test: blockdev write zeroes read split partial ...passed 00:06:13.981 Test: blockdev reset ...[2024-11-19 23:18:59.989339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:13.981 passed 00:06:13.981 Test: blockdev write read 8 blocks ...[2024-11-19 23:18:59.991318] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:13.981 passed 00:06:13.981 Test: blockdev write read size > 128k ...passed 00:06:13.981 Test: blockdev write read invalid size ...passed 00:06:13.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.981 Test: blockdev write read max offset ...passed 00:06:13.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.981 Test: blockdev writev readv 8 blocks ...passed 00:06:13.981 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.981 Test: blockdev writev readv block ...passed 00:06:13.981 Test: blockdev writev readv size > 128k ...passed 00:06:13.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.981 Test: blockdev comparev and writev ...[2024-11-19 23:18:59.996165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc06000 len:0x1000 00:06:13.981 [2024-11-19 23:18:59.996213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev nvme passthru rw ...passed 00:06:13.981 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.981 Test: blockdev nvme admin passthru ...[2024-11-19 23:18:59.996702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.981 [2024-11-19 23:18:59.996747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev copy ...passed 00:06:13.981 Suite: bdevio tests on: Nvme2n3 00:06:13.981 Test: blockdev write read block ...passed 00:06:13.981 Test: blockdev write zeroes read block ...passed 00:06:13.981 Test: blockdev write zeroes read no split ...passed 00:06:13.981 Test: blockdev write zeroes read split ...passed 00:06:13.981 Test: blockdev write zeroes read split partial ...passed 00:06:13.981 Test: blockdev reset ...[2024-11-19 23:19:00.011222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:13.981 passed 00:06:13.981 Test: blockdev write read 8 blocks ...[2024-11-19 23:19:00.013231] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:13.981 passed 00:06:13.981 Test: blockdev write read size > 128k ...passed 00:06:13.981 Test: blockdev write read invalid size ...passed 00:06:13.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.981 Test: blockdev write read max offset ...passed 00:06:13.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.981 Test: blockdev writev readv 8 blocks ...passed 00:06:13.981 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.981 Test: blockdev writev readv block ...passed 00:06:13.981 Test: blockdev writev readv size > 128k ...passed 00:06:13.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.981 Test: blockdev comparev and writev ...[2024-11-19 23:19:00.017208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x301205000 len:0x1000 00:06:13.981 [2024-11-19 23:19:00.017249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev nvme passthru rw ...passed 00:06:13.981 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.981 Test: blockdev nvme admin passthru ...[2024-11-19 23:19:00.017771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.981 [2024-11-19 23:19:00.017794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev copy ...passed 00:06:13.981 Suite: bdevio tests on: Nvme2n2 00:06:13.981 Test: blockdev write read block ...passed 00:06:13.981 Test: blockdev write zeroes read block ...passed 00:06:13.981 Test: blockdev write zeroes read no split ...passed 00:06:13.981 Test: blockdev write zeroes read split ...passed 00:06:13.981 Test: blockdev write zeroes read split partial ...passed 00:06:13.981 Test: blockdev reset ...[2024-11-19 23:19:00.032426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:13.981 passed 00:06:13.981 Test: blockdev write read 8 blocks ...[2024-11-19 23:19:00.034142] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:13.981 passed 00:06:13.981 Test: blockdev write read size > 128k ...passed 00:06:13.981 Test: blockdev write read invalid size ...passed 00:06:13.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.981 Test: blockdev write read max offset ...passed 00:06:13.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.981 Test: blockdev writev readv 8 blocks ...passed 00:06:13.981 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.981 Test: blockdev writev readv block ...passed 00:06:13.981 Test: blockdev writev readv size > 128k ...passed 00:06:13.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.981 Test: blockdev comparev and writev ...[2024-11-19 23:19:00.037823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e2036000 len:0x1000 00:06:13.981 [2024-11-19 23:19:00.037865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev nvme passthru rw ...passed 00:06:13.981 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.981 Test: blockdev nvme admin passthru ...[2024-11-19 23:19:00.038338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.981 [2024-11-19 23:19:00.038360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev copy ...passed 00:06:13.981 Suite: bdevio tests on: Nvme2n1 00:06:13.981 Test: blockdev write read block ...passed 00:06:13.981 Test: blockdev write zeroes read block ...passed 00:06:13.981 Test: blockdev write zeroes read no split ...passed 00:06:13.981 Test: blockdev write zeroes read split ...passed 00:06:13.981 Test: blockdev write zeroes read split partial ...passed 00:06:13.981 Test: blockdev reset ...[2024-11-19 23:19:00.053534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:13.981 passed 00:06:13.981 Test: blockdev write read 8 blocks ...[2024-11-19 23:19:00.055542] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:13.981 passed 00:06:13.981 Test: blockdev write read size > 128k ...passed 00:06:13.981 Test: blockdev write read invalid size ...passed 00:06:13.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.981 Test: blockdev write read max offset ...passed 00:06:13.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.981 Test: blockdev writev readv 8 blocks ...passed 00:06:13.981 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.981 Test: blockdev writev readv block ...passed 00:06:13.981 Test: blockdev writev readv size > 128k ...passed 00:06:13.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.981 Test: blockdev comparev and writev ...[2024-11-19 23:19:00.059920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e2030000 len:0x1000 00:06:13.981 [2024-11-19 23:19:00.060055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev nvme passthru rw ...passed 00:06:13.981 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.981 Test: blockdev nvme admin passthru ...[2024-11-19 23:19:00.060481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.981 [2024-11-19 23:19:00.060505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.981 passed 00:06:13.981 Test: blockdev copy ...passed 00:06:13.981 Suite: bdevio tests on: Nvme1n1 00:06:13.981 Test: blockdev write read block ...passed 00:06:13.981 Test: blockdev write zeroes read block ...passed 00:06:13.981 Test: blockdev write zeroes read no split ...passed 00:06:13.981 Test: blockdev write zeroes read split ...passed 00:06:13.981 Test: blockdev write zeroes read split partial ...passed 00:06:13.981 Test: blockdev reset ...[2024-11-19 23:19:00.075159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:13.981 [2024-11-19 23:19:00.076639] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:06:13.981 Test: blockdev write read 8 blocks ...passed 00:06:13.981 Test: blockdev write read size > 128k ...
00:06:13.981 passed 00:06:13.981 Test: blockdev write read invalid size ...passed 00:06:13.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.981 Test: blockdev write read max offset ...passed 00:06:13.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.982 Test: blockdev writev readv 8 blocks ...passed 00:06:13.982 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.982 Test: blockdev writev readv block ...passed 00:06:13.982 Test: blockdev writev readv size > 128k ...passed 00:06:13.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.982 Test: blockdev comparev and writev ...[2024-11-19 23:19:00.080303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e202c000 len:0x1000 00:06:13.982 [2024-11-19 23:19:00.080339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:13.982 passed 00:06:13.982 Test: blockdev nvme passthru rw ...passed 00:06:13.982 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.982 Test: blockdev nvme admin passthru ...[2024-11-19 23:19:00.080830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:13.982 [2024-11-19 23:19:00.080855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:13.982 passed 00:06:13.982 Test: blockdev copy ...passed 00:06:13.982 Suite: bdevio tests on: Nvme0n1 00:06:13.982 Test: blockdev write read block ...passed 00:06:13.982 Test: blockdev write zeroes read block ...passed 00:06:13.982 Test: blockdev write zeroes read no split ...passed 00:06:13.982 Test: blockdev write zeroes read split ...passed 00:06:13.982 Test: blockdev write zeroes read split partial ...passed 00:06:13.982 Test: blockdev reset ...[2024-11-19 23:19:00.096711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:13.982 passed 00:06:13.982 Test: blockdev write read 8 blocks ...[2024-11-19 23:19:00.098266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:13.982 passed 00:06:13.982 Test: blockdev write read size > 128k ...passed 00:06:13.982 Test: blockdev write read invalid size ...passed 00:06:13.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.982 Test: blockdev write read max offset ...passed 00:06:13.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.982 Test: blockdev writev readv 8 blocks ...passed 00:06:13.982 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.982 Test: blockdev writev readv block ...passed 00:06:13.982 Test: blockdev writev readv size > 128k ...passed 00:06:13.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.982 Test: blockdev comparev and writev ...[2024-11-19 23:19:00.101922] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:13.982 separate metadata which is not supported yet. 
00:06:13.982 passed 00:06:13.982 Test: blockdev nvme passthru rw ...passed 00:06:13.982 Test: blockdev nvme passthru vendor specific ...passed 00:06:13.982 Test: blockdev nvme admin passthru ...[2024-11-19 23:19:00.102423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:13.982 [2024-11-19 23:19:00.102458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:13.982 passed 00:06:13.982 Test: blockdev copy ...passed 00:06:13.982 00:06:13.982 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.982 suites 6 6 n/a 0 0 00:06:13.982 tests 138 138 138 0 0 00:06:13.982 asserts 893 893 893 0 n/a 00:06:13.982 00:06:13.982 Elapsed time = 0.299 seconds 00:06:13.982 0 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71893 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71893 ']' 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71893 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71893 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71893' 00:06:13.982 killing process with pid 71893 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71893 00:06:13.982 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71893 00:06:14.240 23:19:00 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:14.240 00:06:14.240 real 0m1.297s 00:06:14.240 user 0m3.350s 00:06:14.240 sys 0m0.243s 00:06:14.240 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.240 23:19:00 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:14.240 ************************************ 00:06:14.240 END TEST bdev_bounds 00:06:14.240 ************************************ 00:06:14.240 23:19:00 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:14.240 23:19:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:14.240 23:19:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.240 23:19:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.240 ************************************ 00:06:14.240 START TEST bdev_nbd 00:06:14.240 ************************************ 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:14.240 23:19:00 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:14.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:14.240 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71942 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71942 /var/tmp/spdk-nbd.sock 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71942 ']' 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:14.241 23:19:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:14.241 [2024-11-19 23:19:00.391572] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
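(The NBD leg that follows exports each bdev as a kernel block device through the /var/tmp/spdk-nbd.sock RPC server started above, waits for the kernel to publish the device in /proc/partitions (waitfornbd), and proves it is readable with one direct-I/O block read. Per device, the trace below reduces to roughly this sketch; the 20-attempt retry mirrors the harness loop, the explicit /dev/nbd0 argument is optional for nbd_start_disk, and /tmp/nbdtest is an illustrative scratch path:

    nbd_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

    # Export the bdev; the RPC prints the NBD device path it attached.
    dev=$(nbd_rpc nbd_start_disk Nvme0n1 /dev/nbd0)

    # waitfornbd: the device is usable once the kernel lists it.
    for _ in $(seq 1 20); do
        grep -q -w "$(basename "$dev")" /proc/partitions && break
        sleep 0.1
    done

    # One 4 KiB direct read is enough to prove the export works.
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    nbd_rpc nbd_stop_disk "$dev"
)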
00:06:14.241 [2024-11-19 23:19:00.391812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.594 [2024-11-19 23:19:00.543670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.594 [2024-11-19 23:19:00.561857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.162 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.422 1+0 records in 
00:06:15.422 1+0 records out 00:06:15.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368053 s, 11.1 MB/s 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.422 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:15.689 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:15.689 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:15.689 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:15.689 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.689 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.689 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.690 1+0 records in 00:06:15.690 1+0 records out 00:06:15.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313313 s, 13.1 MB/s 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.690 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.950 1+0 records in 00:06:15.950 1+0 records out 00:06:15.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525714 s, 7.8 MB/s 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.950 23:19:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.208 1+0 records in 00:06:16.208 1+0 records out 00:06:16.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471499 s, 8.7 MB/s 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.208 23:19:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.208 1+0 records in 00:06:16.208 1+0 records out 00:06:16.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420966 s, 9.7 MB/s 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.208 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.467 1+0 records in 00:06:16.467 1+0 records out 00:06:16.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463219 s, 8.8 MB/s 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.467 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd0", 00:06:16.726 "bdev_name": "Nvme0n1" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd1", 00:06:16.726 "bdev_name": "Nvme1n1" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd2", 00:06:16.726 "bdev_name": "Nvme2n1" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd3", 00:06:16.726 "bdev_name": "Nvme2n2" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd4", 00:06:16.726 "bdev_name": "Nvme2n3" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd5", 00:06:16.726 "bdev_name": "Nvme3n1" 00:06:16.726 } 00:06:16.726 ]' 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd0", 00:06:16.726 "bdev_name": "Nvme0n1" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd1", 00:06:16.726 "bdev_name": "Nvme1n1" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd2", 00:06:16.726 "bdev_name": "Nvme2n1" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd3", 00:06:16.726 "bdev_name": "Nvme2n2" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd4", 00:06:16.726 "bdev_name": "Nvme2n3" 00:06:16.726 }, 00:06:16.726 { 00:06:16.726 "nbd_device": "/dev/nbd5", 00:06:16.726 "bdev_name": "Nvme3n1" 00:06:16.726 } 00:06:16.726 ]' 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.726 23:19:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.985 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.243 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.501 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.759 23:19:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.018 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.277 23:19:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.277 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:18.277 /dev/nbd0 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.536 
23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.536 1+0 records in 00:06:18.536 1+0 records out 00:06:18.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578028 s, 7.1 MB/s 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:18.536 /dev/nbd1 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.536 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.794 1+0 records in 00:06:18.794 1+0 records out 00:06:18.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414931 s, 9.9 MB/s 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:18.794 /dev/nbd10 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.794 1+0 records in 00:06:18.794 1+0 records out 00:06:18.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038663 s, 10.6 MB/s 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.794 23:19:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:19.053 /dev/nbd11 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.053 23:19:05 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.053 1+0 records in 00:06:19.053 1+0 records out 00:06:19.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506431 s, 8.1 MB/s 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.053 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:19.311 /dev/nbd12 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.311 1+0 records in 00:06:19.311 1+0 records out 00:06:19.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368883 s, 11.1 MB/s 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.311 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:19.569 /dev/nbd13 
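Each nbd_start_disk above is followed by a waitfornbd readiness check, and the trace repeats the same helper from common/autotest_common.sh verbatim for every device. A minimal sketch of that pattern, assuming a short sleep between retries (only successful first attempts appear in this log, so the interval is not visible) and a shortened scratch-file path:

  # sketch of waitfornbd as traced above; retry sleep is an assumption
  waitfornbd() {
      local nbd_name=$1 i size
      # 1) wait until the kernel lists the device in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed interval, not visible in this trace
      done
      # 2) prove the device services I/O: read one 4 KiB block with O_DIRECT
      for ((i = 1; i <= 20; i++)); do
          if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
              size=$(stat -c %s /tmp/nbdtest)
              rm -f /tmp/nbdtest
              [ "$size" != 0 ] && return 0
          fi
      done
      return 1
  }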
00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.569 1+0 records in 00:06:19.569 1+0 records out 00:06:19.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607825 s, 6.7 MB/s 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.569 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd0", 00:06:19.827 "bdev_name": "Nvme0n1" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd1", 00:06:19.827 "bdev_name": "Nvme1n1" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd10", 00:06:19.827 "bdev_name": "Nvme2n1" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd11", 00:06:19.827 "bdev_name": "Nvme2n2" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd12", 00:06:19.827 "bdev_name": "Nvme2n3" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd13", 00:06:19.827 "bdev_name": "Nvme3n1" 00:06:19.827 } 00:06:19.827 ]' 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd0", 00:06:19.827 "bdev_name": "Nvme0n1" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd1", 00:06:19.827 "bdev_name": "Nvme1n1" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd10", 00:06:19.827 "bdev_name": "Nvme2n1" 
00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd11", 00:06:19.827 "bdev_name": "Nvme2n2" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd12", 00:06:19.827 "bdev_name": "Nvme2n3" 00:06:19.827 }, 00:06:19.827 { 00:06:19.827 "nbd_device": "/dev/nbd13", 00:06:19.827 "bdev_name": "Nvme3n1" 00:06:19.827 } 00:06:19.827 ]' 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.827 /dev/nbd1 00:06:19.827 /dev/nbd10 00:06:19.827 /dev/nbd11 00:06:19.827 /dev/nbd12 00:06:19.827 /dev/nbd13' 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.827 /dev/nbd1 00:06:19.827 /dev/nbd10 00:06:19.827 /dev/nbd11 00:06:19.827 /dev/nbd12 00:06:19.827 /dev/nbd13' 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.827 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.828 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.828 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.828 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:19.828 256+0 records in 00:06:19.828 256+0 records out 00:06:19.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00769107 s, 136 MB/s 00:06:19.828 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.828 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.828 256+0 records in 00:06:19.828 256+0 records out 00:06:19.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0515754 s, 20.3 MB/s 00:06:19.828 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.828 23:19:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.087 256+0 records in 00:06:20.087 256+0 records out 00:06:20.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0511176 s, 20.5 MB/s 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:20.087 256+0 records in 00:06:20.087 256+0 records out 
00:06:20.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0506758 s, 20.7 MB/s 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:20.087 256+0 records in 00:06:20.087 256+0 records out 00:06:20.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0507633 s, 20.7 MB/s 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:20.087 256+0 records in 00:06:20.087 256+0 records out 00:06:20.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0527896 s, 19.9 MB/s 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.087 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:20.348 256+0 records in 00:06:20.348 256+0 records out 00:06:20.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.064698 s, 16.2 MB/s 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.348 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.609 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.867 
23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.867 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.868 23:19:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.128 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.388 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:21.649 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:21.649 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:21.649 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:21.649 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.649 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.649 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:21.649 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.650 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:21.910 23:19:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:21.910 malloc_lvol_verify 00:06:21.910 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:22.249 12768fc9-3551-4397-b1c5-7fdba95141ee 00:06:22.249 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:22.508 7a9929a7-8605-4606-b3c3-8ab083030b85 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:22.508 /dev/nbd0 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:22.508 mke2fs 1.47.0 (5-Feb-2023) 00:06:22.508 Discarding device blocks: 0/4096 done 00:06:22.508 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:22.508 00:06:22.508 Allocating group tables: 0/1 done 00:06:22.508 Writing inode tables: 0/1 done 00:06:22.508 Creating journal (1024 blocks): done 00:06:22.508 Writing superblocks and filesystem accounting information: 0/1 done 00:06:22.508 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
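The nbd_with_lvol_verify round-trip traced just above reduces to a short RPC sequence; a condensed sketch, with the commands exactly as they appear in the trace and error handling omitted (size units per the standard SPDK RPC semantics):

  # condensed from the nbd_with_lvol_verify trace above
  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
  $rpc nbd_start_disk lvs/lvol /dev/nbd0
  [[ -e /sys/block/nbd0/size ]]                          # capacity check seen in the trace
  mkfs.ext4 /dev/nbd0                                    # if a filesystem fits, the data path works
  $rpc nbd_stop_disk /dev/nbd0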
00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.508 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71942 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71942 ']' 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71942 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71942 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.769 killing process with pid 71942 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71942' 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71942 00:06:22.769 23:19:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71942 00:06:23.031 23:19:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:23.031 00:06:23.031 real 0m8.725s 00:06:23.031 user 0m12.929s 00:06:23.031 sys 0m2.857s 00:06:23.031 23:19:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.031 23:19:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:23.031 ************************************ 00:06:23.031 END TEST bdev_nbd 00:06:23.031 ************************************ 00:06:23.031 23:19:09 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:23.031 23:19:09 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:23.031 skipping fio tests on NVMe due to multi-ns failures. 00:06:23.031 23:19:09 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
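The killprocess teardown traced above (pid 71942) is a few guards plus kill/wait. A sketch condensed from the checks visible in this trace; the sudo branch evaluates false here (process name reactor_0), so its behavior is an assumption:

  # condensed from the killprocess trace above
  killprocess() {
      local pid=$1 pname
      [ -z "$pid" ] && return 1             # no pid given
      kill -0 "$pid" || return 1            # process already gone?
      if [ "$(uname)" = Linux ]; then
          pname=$(ps --no-headers -o comm= "$pid")
          # only the false branch is visible in this log (pname=reactor_0);
          # what the sudo branch actually does here is assumed
          [ "$pname" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }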
00:06:23.031 23:19:09 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:23.031 23:19:09 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.031 23:19:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:23.031 23:19:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.031 23:19:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.031 ************************************ 00:06:23.031 START TEST bdev_verify 00:06:23.031 ************************************ 00:06:23.031 23:19:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:23.031 [2024-11-19 23:19:09.154784] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:23.031 [2024-11-19 23:19:09.154893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72303 ] 00:06:23.292 [2024-11-19 23:19:09.309036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.292 [2024-11-19 23:19:09.328180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.292 [2024-11-19 23:19:09.328275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.553 Running I/O for 5 seconds... 00:06:25.873 19776.00 IOPS, 77.25 MiB/s [2024-11-19T23:19:13.017Z] 20576.00 IOPS, 80.38 MiB/s [2024-11-19T23:19:13.960Z] 21333.33 IOPS, 83.33 MiB/s [2024-11-19T23:19:14.907Z] 21632.00 IOPS, 84.50 MiB/s [2024-11-19T23:19:14.907Z] 21593.60 IOPS, 84.35 MiB/s 00:06:28.715 Latency(us) 00:06:28.715 [2024-11-19T23:19:14.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.715 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x0 length 0xbd0bd 00:06:28.715 Nvme0n1 : 5.08 1790.73 7.00 0.00 0.00 71317.99 14317.10 72997.02 00:06:28.715 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:28.715 Nvme0n1 : 5.10 1783.01 6.96 0.00 0.00 71037.54 8116.38 67350.84 00:06:28.715 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x0 length 0xa0000 00:06:28.715 Nvme1n1 : 5.08 1790.21 6.99 0.00 0.00 71265.05 15526.99 65737.65 00:06:28.715 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0xa0000 length 0xa0000 00:06:28.715 Nvme1n1 : 5.09 1773.25 6.93 0.00 0.00 71274.59 8670.92 65737.65 00:06:28.715 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x0 length 0x80000 00:06:28.715 Nvme2n1 : 5.08 1789.14 6.99 0.00 0.00 71190.89 16333.59 64124.46 00:06:28.715 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x80000 length 0x80000 00:06:28.715 Nvme2n1 : 5.06 1772.25 6.92 0.00 0.00 71953.35 13510.50 64931.05 00:06:28.715 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x0 length 0x80000 00:06:28.715 Nvme2n2 : 5.08 1788.65 6.99 0.00 0.00 71096.60 16535.24 66947.54 00:06:28.715 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x80000 length 0x80000 00:06:28.715 Nvme2n2 : 5.06 1771.69 6.92 0.00 0.00 71834.45 14821.22 58881.58 00:06:28.715 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x0 length 0x80000 00:06:28.715 Nvme2n3 : 5.08 1788.16 6.98 0.00 0.00 71001.64 12905.55 70577.23 00:06:28.715 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x80000 length 0x80000 00:06:28.715 Nvme2n3 : 5.08 1775.34 6.93 0.00 0.00 71442.52 8015.56 60091.47 00:06:28.715 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x0 length 0x20000 00:06:28.715 Nvme3n1 : 5.08 1787.35 6.98 0.00 0.00 70902.32 7662.67 72593.72 00:06:28.715 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:28.715 Verification LBA range: start 0x20000 length 0x20000 00:06:28.715 Nvme3n1 : 5.09 1774.43 6.93 0.00 0.00 71320.26 9326.28 63317.86 00:06:28.715 [2024-11-19T23:19:14.907Z] =================================================================================================================== 00:06:28.715 [2024-11-19T23:19:14.907Z] Total : 21384.22 83.53 0.00 0.00 71301.63 7662.67 72997.02 00:06:29.660 00:06:29.660 real 0m6.671s 00:06:29.660 user 0m12.599s 00:06:29.660 sys 0m0.220s 00:06:29.660 23:19:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.660 23:19:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:29.660 ************************************ 00:06:29.660 END TEST bdev_verify 00:06:29.660 ************************************ 00:06:29.660 23:19:15 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:29.660 23:19:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:29.660 23:19:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.660 23:19:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.660 ************************************ 00:06:29.660 START TEST bdev_verify_big_io 00:06:29.660 ************************************ 00:06:29.660 23:19:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:29.921 [2024-11-19 23:19:15.915401] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:06:29.921 [2024-11-19 23:19:15.915553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72396 ]
00:06:29.921 [2024-11-19 23:19:16.078613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:30.182 [2024-11-19 23:19:16.114887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:30.182 [2024-11-19 23:19:16.114902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.443 Running I/O for 5 seconds...
00:06:34.902 1144.00 IOPS, 71.50 MiB/s
[2024-11-19T23:19:22.477Z] 1828.00 IOPS, 114.25 MiB/s
[2024-11-19T23:19:22.477Z] 2184.67 IOPS, 136.54 MiB/s
[2024-11-19T23:19:22.739Z] 2448.75 IOPS, 153.05 MiB/s
00:06:36.547 Latency(us)
00:06:36.547 [2024-11-19T23:19:22.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:36.547 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x0 length 0xbd0b
00:06:36.547 Nvme0n1 : 5.52 139.03 8.69 0.00 0.00 894191.20 29239.14 967916.31
00:06:36.547 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:36.547 Nvme0n1 : 5.71 128.82 8.05 0.00 0.00 964589.23 18854.20 967916.31
00:06:36.547 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x0 length 0xa000
00:06:36.547 Nvme1n1 : 5.53 138.99 8.69 0.00 0.00 865375.84 88322.36 800144.15
00:06:36.547 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0xa000 length 0xa000
00:06:36.547 Nvme1n1 : 5.78 129.82 8.11 0.00 0.00 926459.91 72190.42 803370.54
00:06:36.547 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x0 length 0x8000
00:06:36.547 Nvme2n1 : 5.68 146.50 9.16 0.00 0.00 803320.12 39523.25 806596.92
00:06:36.547 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x8000 length 0x8000
00:06:36.547 Nvme2n1 : 5.78 132.89 8.31 0.00 0.00 884617.32 62107.96 816276.09
00:06:36.547 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x0 length 0x8000
00:06:36.547 Nvme2n2 : 5.73 150.86 9.43 0.00 0.00 758857.46 29844.09 822728.86
00:06:36.547 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x8000 length 0x8000
00:06:36.547 Nvme2n2 : 5.81 132.78 8.30 0.00 0.00 854546.34 63317.86 838860.80
00:06:36.547 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x0 length 0x8000
00:06:36.547 Nvme2n3 : 5.74 156.23 9.76 0.00 0.00 714477.55 22887.19 845313.58
00:06:36.547 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x8000 length 0x8000
00:06:36.547 Nvme2n3 : 5.85 141.81 8.86 0.00 0.00 781383.23 29037.49 858219.13
00:06:36.547 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x0 length 0x2000
00:06:36.547 Nvme3n1 : 5.84 186.43 11.65 0.00 0.00 583407.30 263.09 864671.90
00:06:36.547 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:36.547 Verification LBA range: start 0x2000 length 0x2000
00:06:36.547 Nvme3n1 : 5.87 156.72 9.79 0.00 0.00 688707.98 1676.21 1387346.71
00:06:36.547 [2024-11-19T23:19:22.739Z] ===================================================================================================================
00:06:36.547 [2024-11-19T23:19:22.739Z] Total : 1740.87 108.80 0.00 0.00 798268.32 263.09 1387346.71
00:06:37.120
00:06:37.120 real 0m7.255s
00:06:37.120 user 0m13.652s
00:06:37.120 sys 0m0.287s
00:06:37.120 23:19:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.120 23:19:23 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:37.120 ************************************
00:06:37.120 END TEST bdev_verify_big_io
00:06:37.120 ************************************
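A hand-run reproduction of the big-I/O verify pass above would use the same bdevperf binary shown in the surrounding xtrace. The flag combination below is inferred from the job headers in the table (queue depth 128, 64 KiB I/O, verify workload, ~5 s) and the EAL line (core mask 0x3); treat it as an assumption rather than the harness's verbatim command:

# Sketch: re-run the verify workload by hand against the same bdev config.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -m 0x3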
00:06:37.120 23:19:23 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:37.120 23:19:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:37.120 23:19:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.120 23:19:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:37.120 ************************************
00:06:37.120 START TEST bdev_write_zeroes
00:06:37.120 ************************************
00:06:37.120 23:19:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:37.398 [2024-11-19 23:19:23.210411] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
00:06:37.398 [2024-11-19 23:19:23.210538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72496 ]
00:06:37.398 [2024-11-19 23:19:23.368356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.398 [2024-11-19 23:19:23.391393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.664 Running I/O for 1 seconds...
00:06:39.567 41695.00 IOPS, 162.87 MiB/s
00:06:39.568 Latency(us)
00:06:39.568 [2024-11-19T23:19:25.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:39.568 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:39.568 Nvme0n1 : 1.62 4238.31 16.56 0.00 0.00 25813.00 4915.20 780785.82
00:06:39.568 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:39.568 Nvme1n1 : 1.14 6232.81 24.35 0.00 0.00 20457.53 7108.14 282308.92
00:06:39.568 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:39.568 Nvme2n1 : 1.14 6252.76 24.42 0.00 0.00 20385.32 7259.37 279082.54
00:06:39.568 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:39.568 Nvme2n2 : 1.14 6246.03 24.40 0.00 0.00 20384.45 7259.37 277469.34
00:06:39.568 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:39.568 Nvme2n3 : 1.14 6239.35 24.37 0.00 0.00 20390.27 7360.20 277469.34
00:06:39.568 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:39.568 Nvme3n1 : 1.21 5934.80 23.18 0.00 0.00 20737.66 7108.14 296827.67
00:06:39.568 [2024-11-19T23:19:25.760Z] ===================================================================================================================
00:06:39.568 [2024-11-19T23:19:25.760Z] Total : 35144.05 137.28 0.00 0.00 21336.85 4915.20 780785.82
00:06:39.568
00:06:39.568 real 0m2.490s
00:06:39.568 user 0m2.213s
00:06:39.568 sys 0m0.163s
00:06:39.568 23:19:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.568 23:19:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:39.568 ************************************
00:06:39.568 END TEST bdev_write_zeroes
00:06:39.568 ************************************
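Every START TEST / END TEST banner pair in this log, including the bdev_write_zeroes one just closed, is printed by the run_test helper from test/common/autotest_common.sh, which also accounts for the real/user/sys timings seen above. A condensed, illustrative paraphrase of what it does (the real function additionally manages xtrace state and a timing log):

# Paraphrased sketch of run_test; illustrative, not the verbatim implementation.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                 # run the test command; time prints real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}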
00:06:39.568 23:19:25 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:39.568 23:19:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:39.568 23:19:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.568 23:19:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:39.568 ************************************
00:06:39.568 START TEST bdev_json_nonenclosed
00:06:39.568 ************************************
00:06:39.568 23:19:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:39.828 [2024-11-19 23:19:25.775240] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
00:06:39.828 [2024-11-19 23:19:25.775382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72538 ]
00:06:39.828 [2024-11-19 23:19:25.937721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.828 [2024-11-19 23:19:25.970025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.828 [2024-11-19 23:19:25.970143] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:39.828 [2024-11-19 23:19:25.970160] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:39.828 [2024-11-19 23:19:25.970173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:40.087
00:06:40.087 real 0m0.347s
00:06:40.087 user 0m0.137s
00:06:40.087 sys 0m0.105s
00:06:40.087 23:19:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.087 23:19:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:40.087 ************************************
00:06:40.087 END TEST bdev_json_nonenclosed
00:06:40.087 ************************************
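bdev_json_nonenclosed above and bdev_json_nonarray below are negative tests: each feeds bdevperf a deliberately malformed config and passes as long as the app refuses to start. Configs of roughly the following shapes reproduce the two errors; the file contents are illustrative reconstructions, not the verbatim repo files:

# Top level not enclosed in {} -> "Invalid JSON configuration: not enclosed in {}."
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# "subsystems" present but not an array -> "'subsystems' should be an array."
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
    || echo "rejected, as the test expects"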
00:06:40.087 23:19:26 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:40.087 23:19:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:40.087 23:19:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.087 23:19:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:40.087 ************************************
00:06:40.087 START TEST bdev_json_nonarray
00:06:40.087 ************************************
00:06:40.087 23:19:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:40.087 [2024-11-19 23:19:26.174147] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
00:06:40.087 [2024-11-19 23:19:26.174289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72569 ]
00:06:40.386 [2024-11-19 23:19:26.337894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.386 [2024-11-19 23:19:26.369283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.386 [2024-11-19 23:19:26.369409] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:06:40.386 [2024-11-19 23:19:26.369428] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:40.386 [2024-11-19 23:19:26.369441] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:40.386
00:06:40.386 real 0m0.339s
00:06:40.386 user 0m0.136s
00:06:40.386 sys 0m0.098s
00:06:40.386 23:19:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.386 ************************************
00:06:40.386 END TEST bdev_json_nonarray
00:06:40.386 ************************************
00:06:40.386 23:19:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:06:40.386 23:19:26 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:06:40.386
00:06:40.386 real 0m30.032s
00:06:40.386 user 0m47.375s
00:06:40.386 sys 0m4.758s
00:06:40.386 23:19:26 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.386 ************************************
00:06:40.386 END TEST blockdev_nvme
00:06:40.386 ************************************
00:06:40.386 23:19:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:40.386 23:19:26 -- spdk/autotest.sh@209 -- # uname -s
00:06:40.386 23:19:26 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:06:40.386 23:19:26 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:40.386 23:19:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:40.386 23:19:26 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.386 23:19:26 -- common/autotest_common.sh@10 -- # set +x
00:06:40.386 ************************************
00:06:40.386 START TEST blockdev_nvme_gpt
00:06:40.386 ************************************
00:06:40.386 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:06:40.645 * Looking for test storage...
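The blockdev_nvme_gpt setup continues below; as part of it, scripts/common.sh probes the installed lcov version with a field-by-field comparison, visible in the xtrace that follows. A condensed, illustrative sketch of that logic, assuming purely numeric version fields:

# Illustrative condensation of scripts/common.sh's lt/cmp_versions logic.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly older
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                        # equal
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"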
00:06:40.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.645 23:19:26 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.645 --rc genhtml_branch_coverage=1 00:06:40.645 --rc genhtml_function_coverage=1 00:06:40.645 --rc genhtml_legend=1 00:06:40.645 --rc geninfo_all_blocks=1 00:06:40.645 --rc geninfo_unexecuted_blocks=1 00:06:40.645 00:06:40.645 ' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.645 --rc 
genhtml_branch_coverage=1 00:06:40.645 --rc genhtml_function_coverage=1 00:06:40.645 --rc genhtml_legend=1 00:06:40.645 --rc geninfo_all_blocks=1 00:06:40.645 --rc geninfo_unexecuted_blocks=1 00:06:40.645 00:06:40.645 ' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.645 --rc genhtml_branch_coverage=1 00:06:40.645 --rc genhtml_function_coverage=1 00:06:40.645 --rc genhtml_legend=1 00:06:40.645 --rc geninfo_all_blocks=1 00:06:40.645 --rc geninfo_unexecuted_blocks=1 00:06:40.645 00:06:40.645 ' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.645 --rc genhtml_branch_coverage=1 00:06:40.645 --rc genhtml_function_coverage=1 00:06:40.645 --rc genhtml_legend=1 00:06:40.645 --rc geninfo_all_blocks=1 00:06:40.645 --rc geninfo_unexecuted_blocks=1 00:06:40.645 00:06:40.645 ' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72642 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:40.645 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:40.645 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72642 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72642 ']' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.645 23:19:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:40.646 23:19:26 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:40.646 [2024-11-19 23:19:26.799268] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:40.646 [2024-11-19 23:19:26.799400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72642 ] 00:06:40.906 [2024-11-19 23:19:26.957708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.906 [2024-11-19 23:19:26.978101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.477 23:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.477 23:19:27 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:41.477 23:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:41.477 23:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:41.477 23:19:27 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:41.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.000 Waiting for block devices as requested 00:06:42.000 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.000 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.259 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.259 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.574 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.574 
23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:47.574 23:19:33 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:47.574 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:47.574 BYT; 00:06:47.574 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:47.575 BYT; 00:06:47.575 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.575 23:19:33 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.575 23:19:33 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:48.510 The operation has completed successfully. 00:06:48.510 23:19:34 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:49.443 The operation has completed successfully. 00:06:49.444 23:19:35 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:49.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.267 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.267 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.267 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.267 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:50.267 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.267 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.267 [] 00:06:50.267 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.267 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:50.267 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:50.267 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:50.267 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:50.525 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:50.525 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.525 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:50.783 23:19:36 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.783 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:50.783 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:50.784 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dd936de1-ae58-45a9-88b5-2e247c32bda5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dd936de1-ae58-45a9-88b5-2e247c32bda5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "23e3639e-2313-4e13-b7a7-b13fcd6251bc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23e3639e-2313-4e13-b7a7-b13fcd6251bc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "359f328a-55a6-4515-91be-bb76e41d9893"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "359f328a-55a6-4515-91be-bb76e41d9893",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "7fde8d79-37d1-422c-ad1d-c1564322b3a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7fde8d79-37d1-422c-ad1d-c1564322b3a4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ccd4a034-72ad-41b5-80b0-bcc82244a5f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ccd4a034-72ad-41b5-80b0-bcc82244a5f3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:50.784 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:50.784 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:50.784 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:50.784 23:19:36 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72642 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72642 ']' 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72642 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72642 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72642' 00:06:50.784 killing process with pid 72642 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72642 00:06:50.784 23:19:36 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72642 00:06:51.040 23:19:37 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:51.040 23:19:37 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:51.040 23:19:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:51.040 23:19:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.040 23:19:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.040 ************************************ 00:06:51.040 START TEST bdev_hello_world 00:06:51.040 ************************************ 00:06:51.040 23:19:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:51.299 
[2024-11-19 23:19:37.236605] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:51.299 [2024-11-19 23:19:37.236724] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73251 ] 00:06:51.299 [2024-11-19 23:19:37.392109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.299 [2024-11-19 23:19:37.410807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.864 [2024-11-19 23:19:37.778812] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:51.864 [2024-11-19 23:19:37.778861] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:51.864 [2024-11-19 23:19:37.778879] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:51.864 [2024-11-19 23:19:37.780947] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:51.864 [2024-11-19 23:19:37.781480] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:51.864 [2024-11-19 23:19:37.781511] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:51.864 [2024-11-19 23:19:37.781673] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:51.864 00:06:51.864 [2024-11-19 23:19:37.781696] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:51.864 00:06:51.864 real 0m0.747s 00:06:51.864 user 0m0.497s 00:06:51.864 sys 0m0.148s 00:06:51.864 23:19:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:51.865 ************************************ 00:06:51.865 END TEST bdev_hello_world 00:06:51.865 ************************************ 00:06:51.865 23:19:37 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:51.865 23:19:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.865 23:19:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.865 23:19:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.865 ************************************ 00:06:51.865 START TEST bdev_bounds 00:06:51.865 ************************************ 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:51.865 Process bdevio pid: 73282 00:06:51.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
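The bdev_bounds test starting here drives the bdevio CUnit app; the xtrace on the following lines shows the exact pair of commands. In essence, bdevio starts in wait mode and a separate RPC client kicks off the suites. The explicit backgrounding below is an illustrative sketch of what waitforlisten coordinates, not the harness's literal process management:

# bdevio sits on /var/tmp/spdk.sock (-w) until told to run its suites;
# paths and flags are the ones shown in the trace below.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests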
00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73282 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73282' 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73282 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73282 ']' 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.865 23:19:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:51.865 [2024-11-19 23:19:38.022216] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:06:51.865 [2024-11-19 23:19:38.022335] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73282 ] 00:06:52.123 [2024-11-19 23:19:38.178630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.123 [2024-11-19 23:19:38.199584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.123 [2024-11-19 23:19:38.199817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.123 [2024-11-19 23:19:38.199898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.687 23:19:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.687 23:19:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:52.687 23:19:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:52.945 I/O targets: 00:06:52.945 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:52.945 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:52.945 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:52.945 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:52.945 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:52.945 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:52.945 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:52.945 00:06:52.945 00:06:52.945 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.945 http://cunit.sourceforge.net/ 00:06:52.945 00:06:52.945 00:06:52.945 Suite: bdevio tests on: Nvme3n1 00:06:52.945 Test: blockdev write read block ...passed 00:06:52.945 Test: blockdev write zeroes read block ...passed 00:06:52.945 Test: blockdev write zeroes read no split ...passed 00:06:52.945 Test: blockdev write zeroes read split ...passed 00:06:52.945 Test: blockdev write zeroes 
read split partial ...passed 00:06:52.945 Test: blockdev reset ...[2024-11-19 23:19:38.960300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:52.945 [2024-11-19 23:19:38.962169] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:52.945 passed 00:06:52.945 Test: blockdev write read 8 blocks ...passed 00:06:52.945 Test: blockdev write read size > 128k ...passed 00:06:52.945 Test: blockdev write read invalid size ...passed 00:06:52.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.945 Test: blockdev write read max offset ...passed 00:06:52.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.945 Test: blockdev writev readv 8 blocks ...passed 00:06:52.945 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.945 Test: blockdev writev readv block ...passed 00:06:52.945 Test: blockdev writev readv size > 128k ...passed 00:06:52.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.945 Test: blockdev comparev and writev ...[2024-11-19 23:19:38.967183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c480e000 len:0x1000 00:06:52.945 [2024-11-19 23:19:38.967227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev nvme passthru rw ...passed 00:06:52.945 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.945 Test: blockdev nvme admin passthru ...[2024-11-19 23:19:38.967846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.945 [2024-11-19 23:19:38.967873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev copy ...passed 00:06:52.945 Suite: bdevio tests on: Nvme2n3 00:06:52.945 Test: blockdev write read block ...passed 00:06:52.945 Test: blockdev write zeroes read block ...passed 00:06:52.945 Test: blockdev write zeroes read no split ...passed 00:06:52.945 Test: blockdev write zeroes read split ...passed 00:06:52.945 Test: blockdev write zeroes read split partial ...passed 00:06:52.945 Test: blockdev reset ...[2024-11-19 23:19:38.982960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:52.945 [2024-11-19 23:19:38.984866] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:52.945 passed 00:06:52.945 Test: blockdev write read 8 blocks ...passed 00:06:52.945 Test: blockdev write read size > 128k ...passed 00:06:52.945 Test: blockdev write read invalid size ...passed 00:06:52.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.945 Test: blockdev write read max offset ...passed 00:06:52.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.945 Test: blockdev writev readv 8 blocks ...passed 00:06:52.945 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.945 Test: blockdev writev readv block ...passed 00:06:52.945 Test: blockdev writev readv size > 128k ...passed 00:06:52.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.945 Test: blockdev comparev and writev ...[2024-11-19 23:19:38.989543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c480a000 len:0x1000 00:06:52.945 [2024-11-19 23:19:38.989583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev nvme passthru rw ...passed 00:06:52.945 Test: blockdev nvme passthru vendor specific ...[2024-11-19 23:19:38.990190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.945 [2024-11-19 23:19:38.990214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev nvme admin passthru ...passed 00:06:52.945 Test: blockdev copy ...passed 00:06:52.945 Suite: bdevio tests on: Nvme2n2 00:06:52.945 Test: blockdev write read block ...passed 00:06:52.945 Test: blockdev write zeroes read block ...passed 00:06:52.945 Test: blockdev write zeroes read no split ...passed 00:06:52.945 Test: blockdev write zeroes read split ...passed 00:06:52.945 Test: blockdev write zeroes read split partial ...passed 00:06:52.945 Test: blockdev reset ...[2024-11-19 23:19:39.005186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:52.945 [2024-11-19 23:19:39.006962] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:52.945 passed 00:06:52.945 Test: blockdev write read 8 blocks ...passed 00:06:52.945 Test: blockdev write read size > 128k ...passed 00:06:52.945 Test: blockdev write read invalid size ...passed 00:06:52.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.945 Test: blockdev write read max offset ...passed 00:06:52.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.945 Test: blockdev writev readv 8 blocks ...passed 00:06:52.945 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.945 Test: blockdev writev readv block ...passed 00:06:52.945 Test: blockdev writev readv size > 128k ...passed 00:06:52.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.945 Test: blockdev comparev and writev ...[2024-11-19 23:19:39.013496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae405000 len:0x1000 00:06:52.945 [2024-11-19 23:19:39.013604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev nvme passthru rw ...passed 00:06:52.945 Test: blockdev nvme passthru vendor specific ...[2024-11-19 23:19:39.014443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.945 [2024-11-19 23:19:39.014539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev nvme admin passthru ...passed 00:06:52.945 Test: blockdev copy ...passed 00:06:52.945 Suite: bdevio tests on: Nvme2n1 00:06:52.945 Test: blockdev write read block ...passed 00:06:52.945 Test: blockdev write zeroes read block ...passed 00:06:52.945 Test: blockdev write zeroes read no split ...passed 00:06:52.945 Test: blockdev write zeroes read split ...passed 00:06:52.945 Test: blockdev write zeroes read split partial ...passed 00:06:52.945 Test: blockdev reset ...[2024-11-19 23:19:39.028271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:52.945 [2024-11-19 23:19:39.029833] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:52.945 passed 00:06:52.945 Test: blockdev write read 8 blocks ...passed 00:06:52.945 Test: blockdev write read size > 128k ...passed 00:06:52.945 Test: blockdev write read invalid size ...passed 00:06:52.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.945 Test: blockdev write read max offset ...passed 00:06:52.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.945 Test: blockdev writev readv 8 blocks ...passed 00:06:52.945 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.945 Test: blockdev writev readv block ...passed 00:06:52.945 Test: blockdev writev readv size > 128k ...passed 00:06:52.945 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.945 Test: blockdev comparev and writev ...[2024-11-19 23:19:39.034750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4c02000 len:0x1000 00:06:52.945 [2024-11-19 23:19:39.034789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev nvme passthru rw ...passed 00:06:52.945 Test: blockdev nvme passthru vendor specific ...[2024-11-19 23:19:39.035415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.945 [2024-11-19 23:19:39.035441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.945 passed 00:06:52.945 Test: blockdev nvme admin passthru ...passed 00:06:52.945 Test: blockdev copy ...passed 00:06:52.945 Suite: bdevio tests on: Nvme1n1p2 00:06:52.945 Test: blockdev write read block ...passed 00:06:52.945 Test: blockdev write zeroes read block ...passed 00:06:52.945 Test: blockdev write zeroes read no split ...passed 00:06:52.945 Test: blockdev write zeroes read split ...passed 00:06:52.945 Test: blockdev write zeroes read split partial ...passed 00:06:52.945 Test: blockdev reset ...[2024-11-19 23:19:39.050523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:52.945 [2024-11-19 23:19:39.052150] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:52.945 passed 00:06:52.945 Test: blockdev write read 8 blocks ...passed 00:06:52.945 Test: blockdev write read size > 128k ...passed 00:06:52.945 Test: blockdev write read invalid size ...passed 00:06:52.945 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.945 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.945 Test: blockdev write read max offset ...passed 00:06:52.945 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.945 Test: blockdev writev readv 8 blocks ...passed 00:06:52.945 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.945 Test: blockdev writev readv block ...passed 00:06:52.945 Test: blockdev writev readv size > 128k ...passed 00:06:52.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.946 Test: blockdev comparev and writev ...[2024-11-19 23:19:39.057246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e043b000 len:0x1000 00:06:52.946 [2024-11-19 23:19:39.057283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.946 passed 00:06:52.946 Test: blockdev nvme passthru rw ...passed 00:06:52.946 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.946 Test: blockdev nvme admin passthru ...passed 00:06:52.946 Test: blockdev copy ...passed 00:06:52.946 Suite: bdevio tests on: Nvme1n1p1 00:06:52.946 Test: blockdev write read block ...passed 00:06:52.946 Test: blockdev write zeroes read block ...passed 00:06:52.946 Test: blockdev write zeroes read no split ...passed 00:06:52.946 Test: blockdev write zeroes read split ...passed 00:06:52.946 Test: blockdev write zeroes read split partial ...passed 00:06:52.946 Test: blockdev reset ...[2024-11-19 23:19:39.067910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:52.946 [2024-11-19 23:19:39.069194] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:52.946 passed 00:06:52.946 Test: blockdev write read 8 blocks ...passed 00:06:52.946 Test: blockdev write read size > 128k ...passed 00:06:52.946 Test: blockdev write read invalid size ...passed 00:06:52.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.946 Test: blockdev write read max offset ...passed 00:06:52.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.946 Test: blockdev writev readv 8 blocks ...passed 00:06:52.946 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.946 Test: blockdev writev readv block ...passed 00:06:52.946 Test: blockdev writev readv size > 128k ...passed 00:06:52.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.946 Test: blockdev comparev and writev ...[2024-11-19 23:19:39.073379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e0437000 len:0x1000 00:06:52.946 [2024-11-19 23:19:39.073417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.946 passed 00:06:52.946 Test: blockdev nvme passthru rw ...passed 00:06:52.946 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.946 Test: blockdev nvme admin passthru ...passed 00:06:52.946 Test: blockdev copy ...passed 00:06:52.946 Suite: bdevio tests on: Nvme0n1 00:06:52.946 Test: blockdev write read block ...passed 00:06:52.946 Test: blockdev write zeroes read block ...passed 00:06:52.946 Test: blockdev write zeroes read no split ...passed 00:06:52.946 Test: blockdev write zeroes read split ...passed 00:06:52.946 Test: blockdev write zeroes read split partial ...passed 00:06:52.946 Test: blockdev reset ...[2024-11-19 23:19:39.083700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:52.946 [2024-11-19 23:19:39.085062] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:52.946 passed 00:06:52.946 Test: blockdev write read 8 blocks ...passed 00:06:52.946 Test: blockdev write read size > 128k ...passed 00:06:52.946 Test: blockdev write read invalid size ...passed 00:06:52.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.946 Test: blockdev write read max offset ...passed 00:06:52.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.946 Test: blockdev writev readv 8 blocks ...passed 00:06:52.946 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.946 Test: blockdev writev readv block ...passed 00:06:52.946 Test: blockdev writev readv size > 128k ...passed 00:06:52.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.946 Test: blockdev comparev and writev ...[2024-11-19 23:19:39.088812] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:52.946 separate metadata which is not supported yet. 
00:06:52.946 passed 00:06:52.946 Test: blockdev nvme passthru rw ...passed 00:06:52.946 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.946 Test: blockdev nvme admin passthru ...[2024-11-19 23:19:39.089118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:52.946 [2024-11-19 23:19:39.089154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:52.946 passed 00:06:52.946 Test: blockdev copy ...passed 00:06:52.946 00:06:52.946 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.946 suites 7 7 n/a 0 0 00:06:52.946 tests 161 161 161 0 0 00:06:52.946 asserts 1025 1025 1025 0 n/a 00:06:52.946 00:06:52.946 Elapsed time = 0.337 seconds 00:06:52.946 0 00:06:52.946 23:19:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73282 00:06:52.946 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73282 ']' 00:06:52.946 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73282 00:06:52.946 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:52.946 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.946 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73282 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.202 killing process with pid 73282 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73282' 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73282 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73282 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:53.202 00:06:53.202 real 0m1.306s 00:06:53.202 user 0m3.375s 00:06:53.202 sys 0m0.237s 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.202 23:19:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:53.202 ************************************ 00:06:53.202 END TEST bdev_bounds 00:06:53.202 ************************************ 00:06:53.202 23:19:39 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:53.202 23:19:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:53.202 23:19:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.203 23:19:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.203 ************************************ 00:06:53.203 START TEST bdev_nbd 00:06:53.203 ************************************ 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:53.203 23:19:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73331 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73331 /var/tmp/spdk-nbd.sock 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73331 ']' 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:53.203 23:19:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:53.203 [2024-11-19 23:19:39.376818] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:06:53.203 [2024-11-19 23:19:39.376927] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.459 [2024-11-19 23:19:39.536854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.459 [2024-11-19 23:19:39.555705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:54.024 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.283 1+0 records in 00:06:54.283 1+0 records out 00:06:54.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427436 s, 9.6 MB/s 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.283 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.544 1+0 records in 00:06:54.544 1+0 records out 00:06:54.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454908 s, 9.0 MB/s 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.544 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.802 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.803 1+0 records in 00:06:54.803 1+0 records out 00:06:54.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448958 s, 9.1 MB/s 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.803 23:19:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.063 1+0 records in 00:06:55.063 1+0 records out 00:06:55.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477258 s, 8.6 MB/s 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.063 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.322 1+0 records in 00:06:55.322 1+0 records out 00:06:55.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257133 s, 15.9 MB/s 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.322 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.583 1+0 records in 00:06:55.583 1+0 records out 00:06:55.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507165 s, 8.1 MB/s 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.583 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.845 1+0 records in 00:06:55.845 1+0 records out 00:06:55.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118094 s, 3.5 MB/s 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.845 23:19:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd0", 00:06:56.104 "bdev_name": "Nvme0n1" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd1", 00:06:56.104 "bdev_name": "Nvme1n1p1" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd2", 00:06:56.104 "bdev_name": "Nvme1n1p2" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd3", 00:06:56.104 "bdev_name": "Nvme2n1" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd4", 00:06:56.104 "bdev_name": "Nvme2n2" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd5", 00:06:56.104 "bdev_name": "Nvme2n3" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd6", 00:06:56.104 "bdev_name": "Nvme3n1" 00:06:56.104 } 00:06:56.104 ]' 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd0", 00:06:56.104 "bdev_name": "Nvme0n1" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd1", 00:06:56.104 "bdev_name": "Nvme1n1p1" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd2", 00:06:56.104 "bdev_name": "Nvme1n1p2" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd3", 00:06:56.104 "bdev_name": "Nvme2n1" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd4", 00:06:56.104 "bdev_name": "Nvme2n2" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd5", 00:06:56.104 "bdev_name": "Nvme2n3" 00:06:56.104 }, 00:06:56.104 { 00:06:56.104 "nbd_device": "/dev/nbd6", 00:06:56.104 "bdev_name": "Nvme3n1" 00:06:56.104 } 00:06:56.104 ]' 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.104 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.362 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:56.627 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:56.627 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:56.627 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:56.628 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.628 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.628 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:56.628 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.628 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.628 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.628 23:19:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:56.891 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:56.891 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.892 23:19:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.149 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:57.409 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:57.409 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.410 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.667 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.668 
23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.668 23:19:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:57.929 /dev/nbd0 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.929 1+0 records in 00:06:57.929 1+0 records out 00:06:57.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050797 s, 8.1 MB/s 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:57.929 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:58.190 /dev/nbd1 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.190 23:19:44 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.190 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.190 1+0 records in 00:06:58.190 1+0 records out 00:06:58.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370646 s, 11.1 MB/s 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.191 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:58.452 /dev/nbd10 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.452 1+0 records in 00:06:58.452 1+0 records out 00:06:58.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491243 s, 8.3 MB/s 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.452 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:58.713 /dev/nbd11 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.713 1+0 records in 00:06:58.713 1+0 records out 00:06:58.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282081 s, 14.5 MB/s 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.713 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:58.972 /dev/nbd12 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 
/proc/partitions 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.972 1+0 records in 00:06:58.972 1+0 records out 00:06:58.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358596 s, 11.4 MB/s 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.972 23:19:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:58.972 /dev/nbd13 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.231 1+0 records in 00:06:59.231 1+0 records out 00:06:59.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466789 s, 8.8 MB/s 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.231 23:19:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:59.231 /dev/nbd14 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.231 1+0 records in 00:06:59.231 1+0 records out 00:06:59.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546456 s, 7.5 MB/s 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.231 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd0", 00:06:59.490 "bdev_name": "Nvme0n1" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd1", 00:06:59.490 "bdev_name": "Nvme1n1p1" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd10", 00:06:59.490 "bdev_name": "Nvme1n1p2" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd11", 00:06:59.490 "bdev_name": "Nvme2n1" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd12", 00:06:59.490 "bdev_name": "Nvme2n2" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd13", 
00:06:59.490 "bdev_name": "Nvme2n3" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd14", 00:06:59.490 "bdev_name": "Nvme3n1" 00:06:59.490 } 00:06:59.490 ]' 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd0", 00:06:59.490 "bdev_name": "Nvme0n1" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd1", 00:06:59.490 "bdev_name": "Nvme1n1p1" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd10", 00:06:59.490 "bdev_name": "Nvme1n1p2" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd11", 00:06:59.490 "bdev_name": "Nvme2n1" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd12", 00:06:59.490 "bdev_name": "Nvme2n2" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd13", 00:06:59.490 "bdev_name": "Nvme2n3" 00:06:59.490 }, 00:06:59.490 { 00:06:59.490 "nbd_device": "/dev/nbd14", 00:06:59.490 "bdev_name": "Nvme3n1" 00:06:59.490 } 00:06:59.490 ]' 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.490 /dev/nbd1 00:06:59.490 /dev/nbd10 00:06:59.490 /dev/nbd11 00:06:59.490 /dev/nbd12 00:06:59.490 /dev/nbd13 00:06:59.490 /dev/nbd14' 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.490 /dev/nbd1 00:06:59.490 /dev/nbd10 00:06:59.490 /dev/nbd11 00:06:59.490 /dev/nbd12 00:06:59.490 /dev/nbd13 00:06:59.490 /dev/nbd14' 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:59.490 256+0 records in 00:06:59.490 256+0 records out 00:06:59.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116937 s, 89.7 MB/s 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.490 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.749 256+0 records in 00:06:59.749 256+0 records out 00:06:59.749 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0707069 s, 14.8 MB/s 00:06:59.749 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.749 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.749 256+0 records in 00:06:59.749 256+0 records out 00:06:59.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0838122 s, 12.5 MB/s 00:06:59.749 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.749 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:59.749 256+0 records in 00:06:59.749 256+0 records out 00:06:59.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0708554 s, 14.8 MB/s 00:06:59.749 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.749 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:00.008 256+0 records in 00:07:00.008 256+0 records out 00:07:00.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0748823 s, 14.0 MB/s 00:07:00.008 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.008 23:19:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:00.008 256+0 records in 00:07:00.008 256+0 records out 00:07:00.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182305 s, 5.8 MB/s 00:07:00.008 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.008 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:00.268 256+0 records in 00:07:00.268 256+0 records out 00:07:00.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0859108 s, 12.2 MB/s 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:00.268 256+0 records in 00:07:00.268 256+0 records out 00:07:00.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0776203 s, 13.5 MB/s 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.268 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.528 23:19:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.528 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.788 23:19:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.049 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.307 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.569 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.841 23:19:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.107 23:19:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:02.107 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:02.107 malloc_lvol_verify 00:07:02.367 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:02.367 6d9d0ae0-453e-464f-80e2-5947f6b4dea4 00:07:02.367 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:02.625 a02d13cf-1bc3-4478-8a15-6447ea548748 00:07:02.625 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:02.883 /dev/nbd0 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:02.883 mke2fs 1.47.0 (5-Feb-2023) 00:07:02.883 Discarding device blocks: 0/4096 done 00:07:02.883 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:02.883 00:07:02.883 Allocating group tables: 0/1 done 00:07:02.883 Writing inode tables: 0/1 done 00:07:02.883 Creating journal (1024 blocks): done 00:07:02.883 Writing superblocks and filesystem accounting information: 0/1 done 00:07:02.883 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.883 23:19:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.883 23:19:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.142 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.142 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73331 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73331 ']' 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73331 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73331 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.143 killing process with pid 73331 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73331' 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73331 00:07:03.143 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73331 00:07:03.402 23:19:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:03.402 00:07:03.402 real 0m10.032s 00:07:03.402 user 0m14.632s 00:07:03.402 sys 0m3.413s 00:07:03.402 ************************************ 00:07:03.402 END TEST bdev_nbd 00:07:03.402 ************************************ 00:07:03.402 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.402 23:19:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:03.402 23:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:03.402 23:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:03.402 23:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:03.402 skipping fio tests on NVMe due to multi-ns failures. 00:07:03.402 23:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
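[Editor's note: the autotest_common.sh@872-@893 entries repeated above for nbd1 through nbd14 all trace the same waitfornbd helper. Below is a minimal sketch of that pattern, reconstructed from the xtrace: the 20-try limit, the /proc/partitions probe, the 4 KiB O_DIRECT read, and the nonzero-size check are taken from the trace itself, while the scratch-file path and the sleep between retries are illustrative assumptions.]

    waitfornbd() {
        local nbd_name=$1 i size
        # Poll until the kernel lists the NBD device in /proc/partitions (up to 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; the trace does not show one
        done
        # Prove the device actually serves I/O: read one 4 KiB block with O_DIRECT.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        done
        size=$(stat -c %s /tmp/nbdtest 2>/dev/null || echo 0)
        rm -f /tmp/nbdtest
        # A zero-byte result means the device appeared but is not usable yet.
        [ "$size" != 0 ] && return 0
        return 1
    }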
00:07:03.402 23:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:03.402 23:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:03.402 23:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:03.402 23:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.402 23:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.402 ************************************ 00:07:03.402 START TEST bdev_verify 00:07:03.402 ************************************ 00:07:03.402 23:19:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:03.402 [2024-11-19 23:19:49.442293] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:07:03.402 [2024-11-19 23:19:49.442405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73733 ] 00:07:03.659 [2024-11-19 23:19:49.598581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.659 [2024-11-19 23:19:49.617928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.659 [2024-11-19 23:19:49.618004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.918 Running I/O for 5 seconds... 
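[Editor's note: the -m 0x3 passed to bdevperf above, and the matching -c 0x3 in the DPDK EAL parameters, is a CPU bitmap with bits 0 and 1 set, which is why the log reports "Total cores available: 2" and starts reactors on cores 0 and 1. An illustrative snippet for decoding such a mask, not part of the harness:]

    mask=0x3
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done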
00:07:06.227 20864.00 IOPS, 81.50 MiB/s [2024-11-19T23:19:53.352Z] 21312.00 IOPS, 83.25 MiB/s [2024-11-19T23:19:54.292Z] 20736.00 IOPS, 81.00 MiB/s [2024-11-19T23:19:55.255Z] 20192.00 IOPS, 78.88 MiB/s [2024-11-19T23:19:55.255Z] 20659.20 IOPS, 80.70 MiB/s 00:07:09.063 Latency(us) 00:07:09.063 [2024-11-19T23:19:55.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:09.063 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x0 length 0xbd0bd 00:07:09.063 Nvme0n1 : 5.07 1464.59 5.72 0.00 0.00 87202.66 15022.87 91145.45 00:07:09.063 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:09.063 Nvme0n1 : 5.06 1440.72 5.63 0.00 0.00 88533.91 17241.01 90338.86 00:07:09.063 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x0 length 0x4ff80 00:07:09.063 Nvme1n1p1 : 5.07 1463.82 5.72 0.00 0.00 87058.04 15325.34 84289.38 00:07:09.063 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:09.063 Nvme1n1p1 : 5.07 1440.29 5.63 0.00 0.00 88368.75 18047.61 81869.59 00:07:09.063 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x0 length 0x4ff7f 00:07:09.063 Nvme1n1p2 : 5.07 1463.40 5.72 0.00 0.00 86878.48 14317.10 79046.50 00:07:09.063 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:09.063 Nvme1n1p2 : 5.07 1439.86 5.62 0.00 0.00 88187.99 17140.18 79046.50 00:07:09.063 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x0 length 0x80000 00:07:09.063 Nvme2n1 : 5.07 1463.02 5.71 0.00 0.00 86705.64 13812.97 75416.81 00:07:09.063 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x80000 length 0x80000 00:07:09.063 Nvme2n1 : 5.07 1439.48 5.62 0.00 0.00 88022.13 18047.61 76223.41 00:07:09.063 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x0 length 0x80000 00:07:09.063 Nvme2n2 : 5.08 1462.63 5.71 0.00 0.00 86531.88 13107.20 76626.71 00:07:09.063 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x80000 length 0x80000 00:07:09.063 Nvme2n2 : 5.08 1448.74 5.66 0.00 0.00 87327.78 3163.37 74206.92 00:07:09.063 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x0 length 0x80000 00:07:09.063 Nvme2n3 : 5.08 1462.20 5.71 0.00 0.00 86363.78 12855.14 79046.50 00:07:09.063 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x80000 length 0x80000 00:07:09.063 Nvme2n3 : 5.08 1448.33 5.66 0.00 0.00 87153.67 3352.42 75013.51 00:07:09.063 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x0 length 0x20000 00:07:09.063 Nvme3n1 : 5.08 1472.89 5.75 0.00 0.00 85600.59 1676.21 83079.48 00:07:09.063 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:09.063 Verification LBA range: start 0x20000 length 0x20000 00:07:09.063 Nvme3n1 
: 5.09 1458.69 5.70 0.00 0.00 86406.18 5192.47 80659.69 00:07:09.063 [2024-11-19T23:19:55.255Z] =================================================================================================================== 00:07:09.063 [2024-11-19T23:19:55.255Z] Total : 20368.65 79.57 0.00 0.00 87160.68 1676.21 91145.45 00:07:09.635 00:07:09.635 real 0m6.374s 00:07:09.635 user 0m12.042s 00:07:09.635 sys 0m0.205s 00:07:09.635 23:19:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.635 23:19:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:09.635 ************************************ 00:07:09.635 END TEST bdev_verify 00:07:09.635 ************************************ 00:07:09.635 23:19:55 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:09.635 23:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:09.635 23:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.635 23:19:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.635 ************************************ 00:07:09.635 START TEST bdev_verify_big_io 00:07:09.635 ************************************ 00:07:09.635 23:19:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:09.895 [2024-11-19 23:19:55.895025] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:07:09.895 [2024-11-19 23:19:55.895175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73826 ] 00:07:09.895 [2024-11-19 23:19:56.058770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.155 [2024-11-19 23:19:56.089568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.155 [2024-11-19 23:19:56.089614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.415 Running I/O for 5 seconds... 
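[Editor's note: in the Latency(us) table above, the MiB/s column is simply IOPS scaled by the -o 4096 I/O size: MiB/s = IOPS x 4096 / 2^20 = IOPS / 256. The 20368.65 IOPS total therefore comes out to 79.57 MiB/s, exactly as reported; for the big-I/O verify run now in flight, the same formula applies with -o 65536, i.e. IOPS / 16. A quick check:]

    awk 'BEGIN { iops = 20368.65; io_size = 4096
                 printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'   # prints 79.57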
00:07:15.502 317.00 IOPS, 19.81 MiB/s [2024-11-19T23:20:02.633Z] 2251.50 IOPS, 140.72 MiB/s [2024-11-19T23:20:02.892Z] 3134.00 IOPS, 195.88 MiB/s 00:07:16.700 Latency(us) 00:07:16.700 [2024-11-19T23:20:02.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.700 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x0 length 0xbd0b 00:07:16.700 Nvme0n1 : 5.78 109.83 6.86 0.00 0.00 1101244.40 26012.75 1213121.77 00:07:16.700 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:16.700 Nvme0n1 : 5.78 116.38 7.27 0.00 0.00 1043587.50 18450.90 1232480.10 00:07:16.700 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x0 length 0x4ff8 00:07:16.700 Nvme1n1p1 : 5.87 114.37 7.15 0.00 0.00 1033083.57 89532.26 1135688.47 00:07:16.700 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:16.700 Nvme1n1p1 : 5.68 116.51 7.28 0.00 0.00 1019369.76 98808.12 1045349.61 00:07:16.700 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x0 length 0x4ff7 00:07:16.700 Nvme1n1p2 : 5.87 114.51 7.16 0.00 0.00 1000063.76 103244.41 1148594.02 00:07:16.700 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:16.700 Nvme1n1p2 : 5.79 121.68 7.61 0.00 0.00 951996.04 101227.91 916294.10 00:07:16.700 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x0 length 0x8000 00:07:16.700 Nvme2n1 : 5.96 125.27 7.83 0.00 0.00 906827.40 47589.22 1051802.39 00:07:16.700 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x8000 length 0x8000 00:07:16.700 Nvme2n1 : 5.87 121.45 7.59 0.00 0.00 923784.26 82676.18 1361535.61 00:07:16.700 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x0 length 0x8000 00:07:16.700 Nvme2n2 : 5.96 128.90 8.06 0.00 0.00 857563.11 38515.00 1077613.49 00:07:16.700 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x8000 length 0x8000 00:07:16.700 Nvme2n2 : 5.98 124.62 7.79 0.00 0.00 877640.52 51622.20 1987454.82 00:07:16.700 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x0 length 0x8000 00:07:16.700 Nvme2n3 : 5.98 132.34 8.27 0.00 0.00 808200.00 24298.73 1161499.57 00:07:16.700 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x8000 length 0x8000 00:07:16.700 Nvme2n3 : 6.07 134.83 8.43 0.00 0.00 784520.76 29440.79 2039077.02 00:07:16.700 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x0 length 0x2000 00:07:16.700 Nvme3n1 : 6.08 158.01 9.88 0.00 0.00 659392.94 850.71 1129235.69 00:07:16.700 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:16.700 Verification LBA range: start 0x2000 length 0x2000 00:07:16.700 Nvme3n1 : 6.10 165.48 10.34 0.00 0.00 624408.79 652.21 2064888.12 00:07:16.700 
[2024-11-19T23:20:02.892Z] =================================================================================================================== 00:07:16.700 [2024-11-19T23:20:02.892Z] Total : 1784.20 111.51 0.00 0.00 880395.80 652.21 2064888.12 00:07:18.074 00:07:18.074 real 0m8.097s 00:07:18.074 user 0m15.361s 00:07:18.074 sys 0m0.295s 00:07:18.074 23:20:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.074 23:20:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:18.074 ************************************ 00:07:18.074 END TEST bdev_verify_big_io 00:07:18.074 ************************************ 00:07:18.074 23:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.074 23:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:18.074 23:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.074 23:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.074 ************************************ 00:07:18.074 START TEST bdev_write_zeroes 00:07:18.074 ************************************ 00:07:18.074 23:20:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:18.074 [2024-11-19 23:20:04.021162] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:07:18.074 [2024-11-19 23:20:04.021260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73924 ] 00:07:18.074 [2024-11-19 23:20:04.169548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.074 [2024-11-19 23:20:04.187691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.640 Running I/O for 1 seconds... 
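[Editor's note: each phase in this job is the same bdevperf binary driven by run_test with a different workload. To reproduce one phase outside the harness, the command recorded in the trace can be invoked directly; the paths are as they appear in this workspace, and the flags mirror the big-I/O verify run above (queue depth 128, 64 KiB I/Os, verify workload, 5 seconds, core mask 0x3):]

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3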
00:07:19.579 59136.00 IOPS, 231.00 MiB/s 00:07:19.579 Latency(us) 00:07:19.579 [2024-11-19T23:20:05.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.579 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.579 Nvme0n1 : 1.02 8449.24 33.00 0.00 0.00 15106.84 11342.77 46379.32 00:07:19.579 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.579 Nvme1n1p1 : 1.02 8438.62 32.96 0.00 0.00 15104.17 11141.12 46580.97 00:07:19.579 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.579 Nvme1n1p2 : 1.03 8428.29 32.92 0.00 0.00 15065.05 11695.66 46379.32 00:07:19.579 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.579 Nvme2n1 : 1.03 8418.78 32.89 0.00 0.00 15018.50 11544.42 45976.02 00:07:19.579 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.579 Nvme2n2 : 1.03 8409.26 32.85 0.00 0.00 14981.68 11645.24 45572.73 00:07:19.579 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.579 Nvme2n3 : 1.03 8455.98 33.03 0.00 0.00 14896.66 8065.97 46177.67 00:07:19.579 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:19.579 Nvme3n1 : 1.03 8446.51 32.99 0.00 0.00 14860.02 8368.44 45371.08 00:07:19.579 [2024-11-19T23:20:05.771Z] =================================================================================================================== 00:07:19.579 [2024-11-19T23:20:05.771Z] Total : 59046.68 230.65 0.00 0.00 15004.44 8065.97 46580.97 00:07:19.837 00:07:19.837 real 0m1.805s 00:07:19.837 user 0m1.538s 00:07:19.837 sys 0m0.156s 00:07:19.837 23:20:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.837 23:20:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:19.837 ************************************ 00:07:19.837 END TEST bdev_write_zeroes 00:07:19.837 ************************************ 00:07:19.837 23:20:05 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.837 23:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:19.837 23:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.837 23:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.837 ************************************ 00:07:19.837 START TEST bdev_json_nonenclosed 00:07:19.837 ************************************ 00:07:19.837 23:20:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:19.837 [2024-11-19 23:20:05.882546] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:07:19.837 [2024-11-19 23:20:05.882644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73961 ] 00:07:20.096 [2024-11-19 23:20:06.036967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.096 [2024-11-19 23:20:06.056575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.096 [2024-11-19 23:20:06.056660] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:20.096 [2024-11-19 23:20:06.056675] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:20.096 [2024-11-19 23:20:06.056691] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.096 00:07:20.096 real 0m0.290s 00:07:20.096 user 0m0.111s 00:07:20.096 sys 0m0.075s 00:07:20.096 23:20:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.096 ************************************ 00:07:20.096 END TEST bdev_json_nonenclosed 00:07:20.096 ************************************ 00:07:20.096 23:20:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:20.096 23:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.096 23:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:20.096 23:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.096 23:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.096 ************************************ 00:07:20.096 START TEST bdev_json_nonarray 00:07:20.096 ************************************ 00:07:20.096 23:20:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:20.096 [2024-11-19 23:20:06.218267] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:07:20.096 [2024-11-19 23:20:06.218361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73986 ] 00:07:20.356 [2024-11-19 23:20:06.370673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.356 [2024-11-19 23:20:06.391284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.356 [2024-11-19 23:20:06.391376] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
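[Editor's note: taken together, the two negative tests here pin down the contract for bdevperf's --json input: the nonenclosed case fails with "not enclosed in {}" (the top level must be a JSON object) and the nonarray case fails with "'subsystems' should be an array". A minimal skeleton satisfying both rules, written as a bash here-doc; this is illustrative only, and the real bdev.json used by the other phases also declares the NVMe bdevs:]

    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }
    EOF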
00:07:20.356 [2024-11-19 23:20:06.391391] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:20.356 [2024-11-19 23:20:06.391407] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.356 00:07:20.356 real 0m0.288s 00:07:20.356 user 0m0.113s 00:07:20.356 sys 0m0.073s 00:07:20.356 23:20:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.356 23:20:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:20.356 ************************************ 00:07:20.356 END TEST bdev_json_nonarray 00:07:20.356 ************************************ 00:07:20.356 23:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:20.356 23:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:20.356 23:20:06 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:20.357 23:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.357 23:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.357 23:20:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.357 ************************************ 00:07:20.357 START TEST bdev_gpt_uuid 00:07:20.357 ************************************ 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74006 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74006 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74006 ']' 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.357 23:20:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:20.615 [2024-11-19 23:20:06.565387] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:07:20.615 [2024-11-19 23:20:06.565484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:07:20.615 [2024-11-19 23:20:06.714626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.615 [2024-11-19 23:20:06.734461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.555 Some configs were skipped because the RPC state that can call them passed over. 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.555 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:21.814 { 00:07:21.814 "name": "Nvme1n1p1", 00:07:21.814 "aliases": [ 00:07:21.814 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:21.814 ], 00:07:21.814 "product_name": "GPT Disk", 00:07:21.814 "block_size": 4096, 00:07:21.814 "num_blocks": 655104, 00:07:21.814 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:21.814 "assigned_rate_limits": { 00:07:21.814 "rw_ios_per_sec": 0, 00:07:21.814 "rw_mbytes_per_sec": 0, 00:07:21.814 "r_mbytes_per_sec": 0, 00:07:21.814 "w_mbytes_per_sec": 0 00:07:21.814 }, 00:07:21.814 "claimed": false, 00:07:21.814 "zoned": false, 00:07:21.814 "supported_io_types": { 00:07:21.814 "read": true, 00:07:21.814 "write": true, 00:07:21.814 "unmap": true, 00:07:21.814 "flush": true, 00:07:21.814 "reset": true, 00:07:21.814 "nvme_admin": false, 00:07:21.814 "nvme_io": false, 00:07:21.814 "nvme_io_md": false, 00:07:21.814 "write_zeroes": true, 00:07:21.814 "zcopy": false, 00:07:21.814 "get_zone_info": false, 00:07:21.814 "zone_management": false, 00:07:21.814 "zone_append": false, 00:07:21.814 "compare": true, 00:07:21.814 "compare_and_write": false, 00:07:21.814 "abort": true, 00:07:21.814 "seek_hole": false, 00:07:21.814 "seek_data": false, 00:07:21.814 "copy": true, 00:07:21.814 "nvme_iov_md": false 00:07:21.814 }, 00:07:21.814 "driver_specific": { 
00:07:21.814 "gpt": { 00:07:21.814 "base_bdev": "Nvme1n1", 00:07:21.814 "offset_blocks": 256, 00:07:21.814 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:21.814 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:21.814 "partition_name": "SPDK_TEST_first" 00:07:21.814 } 00:07:21.814 } 00:07:21.814 } 00:07:21.814 ]' 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.814 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:21.814 { 00:07:21.814 "name": "Nvme1n1p2", 00:07:21.814 "aliases": [ 00:07:21.814 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:21.814 ], 00:07:21.814 "product_name": "GPT Disk", 00:07:21.814 "block_size": 4096, 00:07:21.814 "num_blocks": 655103, 00:07:21.814 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:21.814 "assigned_rate_limits": { 00:07:21.814 "rw_ios_per_sec": 0, 00:07:21.814 "rw_mbytes_per_sec": 0, 00:07:21.814 "r_mbytes_per_sec": 0, 00:07:21.814 "w_mbytes_per_sec": 0 00:07:21.814 }, 00:07:21.814 "claimed": false, 00:07:21.814 "zoned": false, 00:07:21.814 "supported_io_types": { 00:07:21.815 "read": true, 00:07:21.815 "write": true, 00:07:21.815 "unmap": true, 00:07:21.815 "flush": true, 00:07:21.815 "reset": true, 00:07:21.815 "nvme_admin": false, 00:07:21.815 "nvme_io": false, 00:07:21.815 "nvme_io_md": false, 00:07:21.815 "write_zeroes": true, 00:07:21.815 "zcopy": false, 00:07:21.815 "get_zone_info": false, 00:07:21.815 "zone_management": false, 00:07:21.815 "zone_append": false, 00:07:21.815 "compare": true, 00:07:21.815 "compare_and_write": false, 00:07:21.815 "abort": true, 00:07:21.815 "seek_hole": false, 00:07:21.815 "seek_data": false, 00:07:21.815 "copy": true, 00:07:21.815 "nvme_iov_md": false 00:07:21.815 }, 00:07:21.815 "driver_specific": { 00:07:21.815 "gpt": { 00:07:21.815 "base_bdev": "Nvme1n1", 00:07:21.815 "offset_blocks": 655360, 00:07:21.815 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:21.815 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:21.815 "partition_name": "SPDK_TEST_second" 00:07:21.815 } 00:07:21.815 } 00:07:21.815 } 00:07:21.815 ]' 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74006 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74006 ']' 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74006 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74006 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.815 killing process with pid 74006 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74006' 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74006 00:07:21.815 23:20:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74006 00:07:22.076 00:07:22.076 real 0m1.722s 00:07:22.076 user 0m1.892s 00:07:22.076 sys 0m0.323s 00:07:22.076 23:20:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.076 23:20:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:22.076 ************************************ 00:07:22.076 END TEST bdev_gpt_uuid 00:07:22.076 ************************************ 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:22.076 23:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:22.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:22.647 Waiting for block devices as requested 00:07:22.647 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.647 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:22.647 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:22.908 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:28.188 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:28.188 23:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:28.188 23:20:13 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:28.188 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:28.188 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:28.188 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:28.188 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:28.188 23:20:14 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:28.188 00:07:28.188 real 0m47.648s 00:07:28.188 user 1m1.103s 00:07:28.188 sys 0m7.330s 00:07:28.188 23:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.188 23:20:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.188 ************************************ 00:07:28.189 END TEST blockdev_nvme_gpt 00:07:28.189 ************************************ 00:07:28.189 23:20:14 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:28.189 23:20:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.189 23:20:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.189 23:20:14 -- common/autotest_common.sh@10 -- # set +x 00:07:28.189 ************************************ 00:07:28.189 START TEST nvme 00:07:28.189 ************************************ 00:07:28.189 23:20:14 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:28.189 * Looking for test storage... 00:07:28.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:28.189 23:20:14 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:28.189 23:20:14 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:28.189 23:20:14 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.447 23:20:14 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.447 23:20:14 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.447 23:20:14 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.447 23:20:14 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.447 23:20:14 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.447 23:20:14 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.447 23:20:14 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.447 23:20:14 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.447 23:20:14 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:28.447 23:20:14 nvme -- scripts/common.sh@345 -- # : 1 00:07:28.447 23:20:14 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.447 23:20:14 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.447 23:20:14 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:28.447 23:20:14 nvme -- scripts/common.sh@353 -- # local d=1 00:07:28.447 23:20:14 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.447 23:20:14 nvme -- scripts/common.sh@355 -- # echo 1 00:07:28.447 23:20:14 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.447 23:20:14 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@353 -- # local d=2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.447 23:20:14 nvme -- scripts/common.sh@355 -- # echo 2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.447 23:20:14 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.447 23:20:14 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.447 23:20:14 nvme -- scripts/common.sh@368 -- # return 0 00:07:28.447 23:20:14 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.447 23:20:14 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 23:20:14 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 23:20:14 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 23:20:14 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.447 --rc genhtml_branch_coverage=1 00:07:28.447 --rc genhtml_function_coverage=1 00:07:28.447 --rc genhtml_legend=1 00:07:28.447 --rc geninfo_all_blocks=1 00:07:28.447 --rc geninfo_unexecuted_blocks=1 00:07:28.447 00:07:28.447 ' 00:07:28.447 23:20:14 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:28.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.282 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.282 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.282 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.282 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:29.282 23:20:15 nvme -- nvme/nvme.sh@79 -- # uname 00:07:29.282 23:20:15 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:29.282 23:20:15 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:29.282 23:20:15 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:29.282 23:20:15 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1075 -- # stubpid=74629 00:07:29.282 Waiting for stub to ready for secondary processes... 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74629 ]] 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:29.282 23:20:15 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:29.282 [2024-11-19 23:20:15.422667] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:07:29.282 [2024-11-19 23:20:15.422799] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:30.216 [2024-11-19 23:20:16.198360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.217 [2024-11-19 23:20:16.211674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.217 [2024-11-19 23:20:16.211785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.217 [2024-11-19 23:20:16.211855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.217 [2024-11-19 23:20:16.221516] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:30.217 [2024-11-19 23:20:16.221551] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.217 [2024-11-19 23:20:16.231842] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:30.217 [2024-11-19 23:20:16.232133] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:30.217 [2024-11-19 23:20:16.233102] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.217 [2024-11-19 23:20:16.233423] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:30.217 [2024-11-19 23:20:16.233530] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:30.217 [2024-11-19 23:20:16.234346] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.217 [2024-11-19 23:20:16.234695] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:30.217 [2024-11-19 23:20:16.234791] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:30.217 [2024-11-19 23:20:16.235406] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:30.217 [2024-11-19 23:20:16.235547] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:30.217 [2024-11-19 23:20:16.235628] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:30.217 [2024-11-19 23:20:16.235698] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:30.217 [2024-11-19 23:20:16.235782] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:30.217 23:20:16 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:30.217 done. 00:07:30.217 23:20:16 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:30.217 23:20:16 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:30.217 23:20:16 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:30.217 23:20:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.217 23:20:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.217 ************************************ 00:07:30.217 START TEST nvme_reset 00:07:30.217 ************************************ 00:07:30.474 23:20:16 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:30.474 Initializing NVMe Controllers 00:07:30.474 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:30.474 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:30.474 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:30.474 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:30.474 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:30.474 00:07:30.474 real 0m0.189s 00:07:30.474 user 0m0.073s 00:07:30.474 sys 0m0.073s 00:07:30.474 ************************************ 00:07:30.474 END TEST nvme_reset 00:07:30.474 23:20:16 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.474 23:20:16 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:30.474 ************************************ 00:07:30.474 23:20:16 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:30.474 23:20:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.474 23:20:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.474 23:20:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.474 ************************************ 00:07:30.474 START TEST nvme_identify 00:07:30.474 ************************************ 00:07:30.474 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:30.474 23:20:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:30.474 23:20:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:30.474 23:20:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:30.474 23:20:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:30.474 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:30.474 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:30.474 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:30.474 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.474 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:30.736 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:30.736 23:20:16 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:30.736 23:20:16 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:30.736 [2024-11-19 
23:20:16.853237] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74650 terminated unexpectedly 00:07:30.736 ===================================================== 00:07:30.736 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:30.736 ===================================================== 00:07:30.736 Controller Capabilities/Features 00:07:30.736 ================================ 00:07:30.736 Vendor ID: 1b36 00:07:30.736 Subsystem Vendor ID: 1af4 00:07:30.736 Serial Number: 12340 00:07:30.736 Model Number: QEMU NVMe Ctrl 00:07:30.736 Firmware Version: 8.0.0 00:07:30.736 Recommended Arb Burst: 6 00:07:30.736 IEEE OUI Identifier: 00 54 52 00:07:30.736 Multi-path I/O 00:07:30.736 May have multiple subsystem ports: No 00:07:30.736 May have multiple controllers: No 00:07:30.736 Associated with SR-IOV VF: No 00:07:30.736 Max Data Transfer Size: 524288 00:07:30.736 Max Number of Namespaces: 256 00:07:30.736 Max Number of I/O Queues: 64 00:07:30.736 NVMe Specification Version (VS): 1.4 00:07:30.736 NVMe Specification Version (Identify): 1.4 00:07:30.736 Maximum Queue Entries: 2048 00:07:30.736 Contiguous Queues Required: Yes 00:07:30.736 Arbitration Mechanisms Supported 00:07:30.736 Weighted Round Robin: Not Supported 00:07:30.736 Vendor Specific: Not Supported 00:07:30.736 Reset Timeout: 7500 ms 00:07:30.736 Doorbell Stride: 4 bytes 00:07:30.736 NVM Subsystem Reset: Not Supported 00:07:30.736 Command Sets Supported 00:07:30.736 NVM Command Set: Supported 00:07:30.736 Boot Partition: Not Supported 00:07:30.736 Memory Page Size Minimum: 4096 bytes 00:07:30.736 Memory Page Size Maximum: 65536 bytes 00:07:30.736 Persistent Memory Region: Not Supported 00:07:30.736 Optional Asynchronous Events Supported 00:07:30.736 Namespace Attribute Notices: Supported 00:07:30.736 Firmware Activation Notices: Not Supported 00:07:30.736 ANA Change Notices: Not Supported 00:07:30.736 PLE Aggregate Log Change Notices: Not Supported 00:07:30.736 LBA Status Info Alert Notices: Not Supported 00:07:30.736 EGE Aggregate Log Change Notices: Not Supported 00:07:30.736 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.736 Zone Descriptor Change Notices: Not Supported 00:07:30.736 Discovery Log Change Notices: Not Supported 00:07:30.736 Controller Attributes 00:07:30.736 128-bit Host Identifier: Not Supported 00:07:30.736 Non-Operational Permissive Mode: Not Supported 00:07:30.736 NVM Sets: Not Supported 00:07:30.736 Read Recovery Levels: Not Supported 00:07:30.736 Endurance Groups: Not Supported 00:07:30.736 Predictable Latency Mode: Not Supported 00:07:30.736 Traffic Based Keep Alive: Not Supported 00:07:30.736 Namespace Granularity: Not Supported 00:07:30.736 SQ Associations: Not Supported 00:07:30.736 UUID List: Not Supported 00:07:30.736 Multi-Domain Subsystem: Not Supported 00:07:30.736 Fixed Capacity Management: Not Supported 00:07:30.736 Variable Capacity Management: Not Supported 00:07:30.736 Delete Endurance Group: Not Supported 00:07:30.736 Delete NVM Set: Not Supported 00:07:30.736 Extended LBA Formats Supported: Supported 00:07:30.736 Flexible Data Placement Supported: Not Supported 00:07:30.736 00:07:30.736 Controller Memory Buffer Support 00:07:30.736 ================================ 00:07:30.736 Supported: No 00:07:30.736 00:07:30.736 Persistent Memory Region Support 00:07:30.736 ================================ 00:07:30.736 Supported: No 00:07:30.736 00:07:30.736 Admin Command Set Attributes 00:07:30.736 ============================ 00:07:30.736 Security Send/Receive:
Not Supported 00:07:30.736 Format NVM: Supported 00:07:30.736 Firmware Activate/Download: Not Supported 00:07:30.736 Namespace Management: Supported 00:07:30.736 Device Self-Test: Not Supported 00:07:30.736 Directives: Supported 00:07:30.736 NVMe-MI: Not Supported 00:07:30.736 Virtualization Management: Not Supported 00:07:30.736 Doorbell Buffer Config: Supported 00:07:30.736 Get LBA Status Capability: Not Supported 00:07:30.736 Command & Feature Lockdown Capability: Not Supported 00:07:30.736 Abort Command Limit: 4 00:07:30.736 Async Event Request Limit: 4 00:07:30.736 Number of Firmware Slots: N/A 00:07:30.736 Firmware Slot 1 Read-Only: N/A 00:07:30.736 Firmware Activation Without Reset: N/A 00:07:30.736 Multiple Update Detection Support: N/A 00:07:30.736 Firmware Update Granularity: No Information Provided 00:07:30.736 Per-Namespace SMART Log: Yes 00:07:30.736 Asymmetric Namespace Access Log Page: Not Supported 00:07:30.736 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:30.736 Command Effects Log Page: Supported 00:07:30.736 Get Log Page Extended Data: Supported 00:07:30.736 Telemetry Log Pages: Not Supported 00:07:30.736 Persistent Event Log Pages: Not Supported 00:07:30.736 Supported Log Pages Log Page: May Support 00:07:30.736 Commands Supported & Effects Log Page: Not Supported 00:07:30.736 Feature Identifiers & Effects Log Page:May Support 00:07:30.736 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.736 Data Area 4 for Telemetry Log: Not Supported 00:07:30.736 Error Log Page Entries Supported: 1 00:07:30.736 Keep Alive: Not Supported 00:07:30.736 00:07:30.736 NVM Command Set Attributes 00:07:30.736 ========================== 00:07:30.736 Submission Queue Entry Size 00:07:30.736 Max: 64 00:07:30.736 Min: 64 00:07:30.736 Completion Queue Entry Size 00:07:30.736 Max: 16 00:07:30.736 Min: 16 00:07:30.736 Number of Namespaces: 256 00:07:30.736 Compare Command: Supported 00:07:30.736 Write Uncorrectable Command: Not Supported 00:07:30.736 Dataset Management Command: Supported 00:07:30.736 Write Zeroes Command: Supported 00:07:30.736 Set Features Save Field: Supported 00:07:30.736 Reservations: Not Supported 00:07:30.736 Timestamp: Supported 00:07:30.736 Copy: Supported 00:07:30.736 Volatile Write Cache: Present 00:07:30.736 Atomic Write Unit (Normal): 1 00:07:30.736 Atomic Write Unit (PFail): 1 00:07:30.736 Atomic Compare & Write Unit: 1 00:07:30.736 Fused Compare & Write: Not Supported 00:07:30.737 Scatter-Gather List 00:07:30.737 SGL Command Set: Supported 00:07:30.737 SGL Keyed: Not Supported 00:07:30.737 SGL Bit Bucket Descriptor: Not Supported 00:07:30.737 SGL Metadata Pointer: Not Supported 00:07:30.737 Oversized SGL: Not Supported 00:07:30.737 SGL Metadata Address: Not Supported 00:07:30.737 SGL Offset: Not Supported 00:07:30.737 Transport SGL Data Block: Not Supported 00:07:30.737 Replay Protected Memory Block: Not Supported 00:07:30.737 00:07:30.737 Firmware Slot Information 00:07:30.737 ========================= 00:07:30.737 Active slot: 1 00:07:30.737 Slot 1 Firmware Revision: 1.0 00:07:30.737 00:07:30.737 00:07:30.737 Commands Supported and Effects 00:07:30.737 ============================== 00:07:30.737 Admin Commands 00:07:30.737 -------------- 00:07:30.737 Delete I/O Submission Queue (00h): Supported 00:07:30.737 Create I/O Submission Queue (01h): Supported 00:07:30.737 Get Log Page (02h): Supported 00:07:30.737 Delete I/O Completion Queue (04h): Supported 00:07:30.737 Create I/O Completion Queue (05h): Supported 00:07:30.737 Identify (06h): Supported 
00:07:30.737 Abort (08h): Supported 00:07:30.737 Set Features (09h): Supported 00:07:30.737 Get Features (0Ah): Supported 00:07:30.737 Asynchronous Event Request (0Ch): Supported 00:07:30.737 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.737 Directive Send (19h): Supported 00:07:30.737 Directive Receive (1Ah): Supported 00:07:30.737 Virtualization Management (1Ch): Supported 00:07:30.737 Doorbell Buffer Config (7Ch): Supported 00:07:30.737 Format NVM (80h): Supported LBA-Change 00:07:30.737 I/O Commands 00:07:30.737 ------------ 00:07:30.737 Flush (00h): Supported LBA-Change 00:07:30.737 Write (01h): Supported LBA-Change 00:07:30.737 Read (02h): Supported 00:07:30.737 Compare (05h): Supported 00:07:30.737 Write Zeroes (08h): Supported LBA-Change 00:07:30.737 Dataset Management (09h): Supported LBA-Change 00:07:30.737 Unknown (0Ch): Supported 00:07:30.737 Unknown (12h): Supported 00:07:30.737 Copy (19h): Supported LBA-Change 00:07:30.737 Unknown (1Dh): Supported LBA-Change 00:07:30.737 00:07:30.737 Error Log 00:07:30.737 ========= 00:07:30.737 00:07:30.737 Arbitration 00:07:30.737 =========== 00:07:30.737 Arbitration Burst: no limit 00:07:30.737 00:07:30.737 Power Management 00:07:30.737 ================ 00:07:30.737 Number of Power States: 1 00:07:30.737 Current Power State: Power State #0 00:07:30.737 Power State #0: 00:07:30.737 Max Power: 25.00 W 00:07:30.737 Non-Operational State: Operational 00:07:30.737 Entry Latency: 16 microseconds 00:07:30.737 Exit Latency: 4 microseconds 00:07:30.737 Relative Read Throughput: 0 00:07:30.737 Relative Read Latency: 0 00:07:30.737 Relative Write Throughput: 0 00:07:30.737 Relative Write Latency: 0 00:07:30.737 [2024-11-19 23:20:16.854371] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74650 terminated unexpectedly 00:07:30.737 Idle Power: Not Reported 00:07:30.737 Active Power: Not Reported 00:07:30.737 Non-Operational Permissive Mode: Not Supported 00:07:30.737 00:07:30.737 Health Information 00:07:30.737 ================== 00:07:30.737 Critical Warnings: 00:07:30.737 Available Spare Space: OK 00:07:30.737 Temperature: OK 00:07:30.737 Device Reliability: OK 00:07:30.737 Read Only: No 00:07:30.737 Volatile Memory Backup: OK 00:07:30.737 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.737 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.737 Available Spare: 0% 00:07:30.737 Available Spare Threshold: 0% 00:07:30.737 Life Percentage Used: 0% 00:07:30.737 Data Units Read: 704 00:07:30.737 Data Units Written: 632 00:07:30.737 Host Read Commands: 36720 00:07:30.737 Host Write Commands: 36506 00:07:30.737 Controller Busy Time: 0 minutes 00:07:30.737 Power Cycles: 0 00:07:30.737 Power On Hours: 0 hours 00:07:30.737 Unsafe Shutdowns: 0 00:07:30.737 Unrecoverable Media Errors: 0 00:07:30.737 Lifetime Error Log Entries: 0 00:07:30.737 Warning Temperature Time: 0 minutes 00:07:30.737 Critical Temperature Time: 0 minutes 00:07:30.737 00:07:30.737 Number of Queues 00:07:30.737 ================ 00:07:30.737 Number of I/O Submission Queues: 64 00:07:30.737 Number of I/O Completion Queues: 64 00:07:30.737 00:07:30.737 ZNS Specific Controller Data 00:07:30.737 ============================ 00:07:30.737 Zone Append Size Limit: 0 00:07:30.737 00:07:30.737 00:07:30.737 Active Namespaces 00:07:30.737 ================= 00:07:30.737 Namespace ID:1 00:07:30.737 Error Recovery Timeout: Unlimited 00:07:30.737 Command Set Identifier: NVM (00h) 00:07:30.737 Deallocate: Supported 00:07:30.737
Deallocated/Unwritten Error: Supported 00:07:30.737 Deallocated Read Value: All 0x00 00:07:30.737 Deallocate in Write Zeroes: Not Supported 00:07:30.737 Deallocated Guard Field: 0xFFFF 00:07:30.737 Flush: Supported 00:07:30.737 Reservation: Not Supported 00:07:30.737 Metadata Transferred as: Separate Metadata Buffer 00:07:30.737 Namespace Sharing Capabilities: Private 00:07:30.737 Size (in LBAs): 1548666 (5GiB) 00:07:30.737 Capacity (in LBAs): 1548666 (5GiB) 00:07:30.737 Utilization (in LBAs): 1548666 (5GiB) 00:07:30.737 Thin Provisioning: Not Supported 00:07:30.737 Per-NS Atomic Units: No 00:07:30.737 Maximum Single Source Range Length: 128 00:07:30.737 Maximum Copy Length: 128 00:07:30.737 Maximum Source Range Count: 128 00:07:30.737 NGUID/EUI64 Never Reused: No 00:07:30.737 Namespace Write Protected: No 00:07:30.737 Number of LBA Formats: 8 00:07:30.737 Current LBA Format: LBA Format #07 00:07:30.737 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.737 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.737 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.737 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.737 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.737 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.737 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.737 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.737 00:07:30.737 NVM Specific Namespace Data 00:07:30.737 =========================== 00:07:30.737 Logical Block Storage Tag Mask: 0 00:07:30.737 Protection Information Capabilities: 00:07:30.737 16b Guard Protection Information Storage Tag Support: No 00:07:30.737 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.737 Storage Tag Check Read Support: No 00:07:30.737 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.737 ===================================================== 00:07:30.737 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:30.737 ===================================================== 00:07:30.737 Controller Capabilities/Features 00:07:30.737 ================================ 00:07:30.737 Vendor ID: 1b36 00:07:30.737 Subsystem Vendor ID: 1af4 00:07:30.737 Serial Number: 12341 00:07:30.737 Model Number: QEMU NVMe Ctrl 00:07:30.737 Firmware Version: 8.0.0 00:07:30.737 Recommended Arb Burst: 6 00:07:30.737 IEEE OUI Identifier: 00 54 52 00:07:30.737 Multi-path I/O 00:07:30.737 May have multiple subsystem ports: No 00:07:30.737 May have multiple controllers: No 00:07:30.737 Associated with SR-IOV VF: No 00:07:30.737 Max Data Transfer Size: 524288 00:07:30.737 Max Number of Namespaces: 256 00:07:30.737 Max Number of I/O Queues: 64 00:07:30.737 NVMe Specification Version (VS): 1.4 00:07:30.737 NVMe 
Specification Version (Identify): 1.4 00:07:30.737 Maximum Queue Entries: 2048 00:07:30.737 Contiguous Queues Required: Yes 00:07:30.737 Arbitration Mechanisms Supported 00:07:30.737 Weighted Round Robin: Not Supported 00:07:30.737 Vendor Specific: Not Supported 00:07:30.737 Reset Timeout: 7500 ms 00:07:30.737 Doorbell Stride: 4 bytes 00:07:30.737 NVM Subsystem Reset: Not Supported 00:07:30.737 Command Sets Supported 00:07:30.737 NVM Command Set: Supported 00:07:30.737 Boot Partition: Not Supported 00:07:30.737 Memory Page Size Minimum: 4096 bytes 00:07:30.737 Memory Page Size Maximum: 65536 bytes 00:07:30.737 Persistent Memory Region: Not Supported 00:07:30.737 Optional Asynchronous Events Supported 00:07:30.737 Namespace Attribute Notices: Supported 00:07:30.737 Firmware Activation Notices: Not Supported 00:07:30.737 ANA Change Notices: Not Supported 00:07:30.737 PLE Aggregate Log Change Notices: Not Supported 00:07:30.737 LBA Status Info Alert Notices: Not Supported 00:07:30.737 EGE Aggregate Log Change Notices: Not Supported 00:07:30.737 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.737 Zone Descriptor Change Notices: Not Supported 00:07:30.737 Discovery Log Change Notices: Not Supported 00:07:30.738 Controller Attributes 00:07:30.738 128-bit Host Identifier: Not Supported 00:07:30.738 Non-Operational Permissive Mode: Not Supported 00:07:30.738 NVM Sets: Not Supported 00:07:30.738 Read Recovery Levels: Not Supported 00:07:30.738 Endurance Groups: Not Supported 00:07:30.738 Predictable Latency Mode: Not Supported 00:07:30.738 Traffic Based Keep Alive: Not Supported 00:07:30.738 Namespace Granularity: Not Supported 00:07:30.738 SQ Associations: Not Supported 00:07:30.738 UUID List: Not Supported 00:07:30.738 Multi-Domain Subsystem: Not Supported 00:07:30.738 Fixed Capacity Management: Not Supported 00:07:30.738 Variable Capacity Management: Not Supported 00:07:30.738 Delete Endurance Group: Not Supported 00:07:30.738 Delete NVM Set: Not Supported 00:07:30.738 Extended LBA Formats Supported: Supported 00:07:30.738 Flexible Data Placement Supported: Not Supported 00:07:30.738 00:07:30.738 Controller Memory Buffer Support 00:07:30.738 ================================ 00:07:30.738 Supported: No 00:07:30.738 00:07:30.738 Persistent Memory Region Support 00:07:30.738 ================================ 00:07:30.738 Supported: No 00:07:30.738 00:07:30.738 Admin Command Set Attributes 00:07:30.738 ============================ 00:07:30.738 Security Send/Receive: Not Supported 00:07:30.738 Format NVM: Supported 00:07:30.738 Firmware Activate/Download: Not Supported 00:07:30.738 Namespace Management: Supported 00:07:30.738 Device Self-Test: Not Supported 00:07:30.738 Directives: Supported 00:07:30.738 NVMe-MI: Not Supported 00:07:30.738 Virtualization Management: Not Supported 00:07:30.738 Doorbell Buffer Config: Supported 00:07:30.738 Get LBA Status Capability: Not Supported 00:07:30.738 Command & Feature Lockdown Capability: Not Supported 00:07:30.738 Abort Command Limit: 4 00:07:30.738 Async Event Request Limit: 4 00:07:30.738 Number of Firmware Slots: N/A 00:07:30.738 Firmware Slot 1 Read-Only: N/A 00:07:30.738 Firmware Activation Without Reset: N/A 00:07:30.738 Multiple Update Detection Support: N/A 00:07:30.738 Firmware Update Granularity: No Information Provided 00:07:30.738 Per-Namespace SMART Log: Yes 00:07:30.738 Asymmetric Namespace Access Log Page: Not Supported 00:07:30.738 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:30.738 Command Effects Log Page: Supported
00:07:30.738 Get Log Page Extended Data: Supported 00:07:30.738 Telemetry Log Pages: Not Supported 00:07:30.738 Persistent Event Log Pages: Not Supported 00:07:30.738 Supported Log Pages Log Page: May Support 00:07:30.738 Commands Supported & Effects Log Page: Not Supported 00:07:30.738 Feature Identifiers & Effects Log Page:May Support 00:07:30.738 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.738 Data Area 4 for Telemetry Log: Not Supported 00:07:30.738 Error Log Page Entries Supported: 1 00:07:30.738 Keep Alive: Not Supported 00:07:30.738 00:07:30.738 NVM Command Set Attributes 00:07:30.738 ========================== 00:07:30.738 Submission Queue Entry Size 00:07:30.738 Max: 64 00:07:30.738 Min: 64 00:07:30.738 Completion Queue Entry Size 00:07:30.738 Max: 16 00:07:30.738 Min: 16 00:07:30.738 Number of Namespaces: 256 00:07:30.738 Compare Command: Supported 00:07:30.738 Write Uncorrectable Command: Not Supported 00:07:30.738 Dataset Management Command: Supported 00:07:30.738 Write Zeroes Command: Supported 00:07:30.738 Set Features Save Field: Supported 00:07:30.738 Reservations: Not Supported 00:07:30.738 Timestamp: Supported 00:07:30.738 Copy: Supported 00:07:30.738 Volatile Write Cache: Present 00:07:30.738 Atomic Write Unit (Normal): 1 00:07:30.738 Atomic Write Unit (PFail): 1 00:07:30.738 Atomic Compare & Write Unit: 1 00:07:30.738 Fused Compare & Write: Not Supported 00:07:30.738 Scatter-Gather List 00:07:30.738 SGL Command Set: Supported 00:07:30.738 SGL Keyed: Not Supported 00:07:30.738 SGL Bit Bucket Descriptor: Not Supported 00:07:30.738 SGL Metadata Pointer: Not Supported 00:07:30.738 Oversized SGL: Not Supported 00:07:30.738 SGL Metadata Address: Not Supported 00:07:30.738 SGL Offset: Not Supported 00:07:30.738 Transport SGL Data Block: Not Supported 00:07:30.738 Replay Protected Memory Block: Not Supported 00:07:30.738 00:07:30.738 Firmware Slot Information 00:07:30.738 ========================= 00:07:30.738 Active slot: 1 00:07:30.738 Slot 1 Firmware Revision: 1.0 00:07:30.738 00:07:30.738 00:07:30.738 Commands Supported and Effects 00:07:30.738 ============================== 00:07:30.738 Admin Commands 00:07:30.738 -------------- 00:07:30.738 Delete I/O Submission Queue (00h): Supported 00:07:30.738 Create I/O Submission Queue (01h): Supported 00:07:30.738 Get Log Page (02h): Supported 00:07:30.738 Delete I/O Completion Queue (04h): Supported 00:07:30.738 Create I/O Completion Queue (05h): Supported 00:07:30.738 Identify (06h): Supported 00:07:30.738 Abort (08h): Supported 00:07:30.738 Set Features (09h): Supported 00:07:30.738 Get Features (0Ah): Supported 00:07:30.738 Asynchronous Event Request (0Ch): Supported 00:07:30.738 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.738 Directive Send (19h): Supported 00:07:30.738 Directive Receive (1Ah): Supported 00:07:30.738 Virtualization Management (1Ch): Supported 00:07:30.738 Doorbell Buffer Config (7Ch): Supported 00:07:30.738 Format NVM (80h): Supported LBA-Change 00:07:30.738 I/O Commands 00:07:30.738 ------------ 00:07:30.738 Flush (00h): Supported LBA-Change 00:07:30.738 Write (01h): Supported LBA-Change 00:07:30.738 Read (02h): Supported 00:07:30.738 Compare (05h): Supported 00:07:30.738 Write Zeroes (08h): Supported LBA-Change 00:07:30.738 Dataset Management (09h): Supported LBA-Change 00:07:30.738 Unknown (0Ch): Supported 00:07:30.738 Unknown (12h): Supported 00:07:30.738 Copy (19h): Supported LBA-Change 00:07:30.738 Unknown (1Dh): Supported LBA-Change 00:07:30.738 00:07:30.738 Error 
Log 00:07:30.738 ========= 00:07:30.738 00:07:30.738 Arbitration 00:07:30.738 =========== 00:07:30.738 Arbitration Burst: no limit 00:07:30.738 00:07:30.738 Power Management 00:07:30.738 ================ 00:07:30.738 Number of Power States: 1 00:07:30.738 Current Power State: Power State #0 00:07:30.738 Power State #0: 00:07:30.738 Max Power: 25.00 W 00:07:30.738 Non-Operational State: Operational 00:07:30.738 Entry Latency: 16 microseconds 00:07:30.738 Exit Latency: 4 microseconds 00:07:30.738 Relative Read Throughput: 0 00:07:30.738 Relative Read Latency: 0 00:07:30.738 Relative Write Throughput: 0 00:07:30.738 Relative Write Latency: 0 00:07:30.738 Idle Power: Not Reported 00:07:30.738 Active Power: Not Reported 00:07:30.738 Non-Operational Permissive Mode: Not Supported 00:07:30.738 00:07:30.738 Health Information 00:07:30.738 ================== 00:07:30.738 Critical Warnings: 00:07:30.738 Available Spare Space: OK 00:07:30.738 [2024-11-19 23:20:16.855084] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74650 terminated unexpectedly 00:07:30.738 Temperature: OK 00:07:30.738 Device Reliability: OK 00:07:30.738 Read Only: No 00:07:30.738 Volatile Memory Backup: OK 00:07:30.738 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.738 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.738 Available Spare: 0% 00:07:30.738 Available Spare Threshold: 0% 00:07:30.738 Life Percentage Used: 0% 00:07:30.738 Data Units Read: 1073 00:07:30.738 Data Units Written: 934 00:07:30.738 Host Read Commands: 53944 00:07:30.738 Host Write Commands: 52610 00:07:30.738 Controller Busy Time: 0 minutes 00:07:30.738 Power Cycles: 0 00:07:30.738 Power On Hours: 0 hours 00:07:30.738 Unsafe Shutdowns: 0 00:07:30.738 Unrecoverable Media Errors: 0 00:07:30.738 Lifetime Error Log Entries: 0 00:07:30.738 Warning Temperature Time: 0 minutes 00:07:30.738 Critical Temperature Time: 0 minutes 00:07:30.738 00:07:30.738 Number of Queues 00:07:30.738 ================ 00:07:30.738 Number of I/O Submission Queues: 64 00:07:30.738 Number of I/O Completion Queues: 64 00:07:30.738 00:07:30.738 ZNS Specific Controller Data 00:07:30.738 ============================ 00:07:30.738 Zone Append Size Limit: 0 00:07:30.738 00:07:30.738 00:07:30.738 Active Namespaces 00:07:30.738 ================= 00:07:30.738 Namespace ID:1 00:07:30.738 Error Recovery Timeout: Unlimited 00:07:30.738 Command Set Identifier: NVM (00h) 00:07:30.738 Deallocate: Supported 00:07:30.738 Deallocated/Unwritten Error: Supported 00:07:30.738 Deallocated Read Value: All 0x00 00:07:30.738 Deallocate in Write Zeroes: Not Supported 00:07:30.738 Deallocated Guard Field: 0xFFFF 00:07:30.738 Flush: Supported 00:07:30.739 Reservation: Not Supported 00:07:30.739 Namespace Sharing Capabilities: Private 00:07:30.739 Size (in LBAs): 1310720 (5GiB) 00:07:30.739 Capacity (in LBAs): 1310720 (5GiB) 00:07:30.739 Utilization (in LBAs): 1310720 (5GiB) 00:07:30.739 Thin Provisioning: Not Supported 00:07:30.739 Per-NS Atomic Units: No 00:07:30.739 Maximum Single Source Range Length: 128 00:07:30.739 Maximum Copy Length: 128 00:07:30.739 Maximum Source Range Count: 128 00:07:30.739 NGUID/EUI64 Never Reused: No 00:07:30.739 Namespace Write Protected: No 00:07:30.739 Number of LBA Formats: 8 00:07:30.739 Current LBA Format: LBA Format #04 00:07:30.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.739 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.739 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.739 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:30.739 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.739 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.739 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.739 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.739 00:07:30.739 NVM Specific Namespace Data 00:07:30.739 =========================== 00:07:30.739 Logical Block Storage Tag Mask: 0 00:07:30.739 Protection Information Capabilities: 00:07:30.739 16b Guard Protection Information Storage Tag Support: No 00:07:30.739 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.739 Storage Tag Check Read Support: No 00:07:30.739 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.739 ===================================================== 00:07:30.739 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:30.739 ===================================================== 00:07:30.739 Controller Capabilities/Features 00:07:30.739 ================================ 00:07:30.739 Vendor ID: 1b36 00:07:30.739 Subsystem Vendor ID: 1af4 00:07:30.739 Serial Number: 12343 00:07:30.739 Model Number: QEMU NVMe Ctrl 00:07:30.739 Firmware Version: 8.0.0 00:07:30.739 Recommended Arb Burst: 6 00:07:30.739 IEEE OUI Identifier: 00 54 52 00:07:30.739 Multi-path I/O 00:07:30.739 May have multiple subsystem ports: No 00:07:30.739 May have multiple controllers: Yes 00:07:30.739 Associated with SR-IOV VF: No 00:07:30.739 Max Data Transfer Size: 524288 00:07:30.739 Max Number of Namespaces: 256 00:07:30.739 Max Number of I/O Queues: 64 00:07:30.739 NVMe Specification Version (VS): 1.4 00:07:30.739 NVMe Specification Version (Identify): 1.4 00:07:30.739 Maximum Queue Entries: 2048 00:07:30.739 Contiguous Queues Required: Yes 00:07:30.739 Arbitration Mechanisms Supported 00:07:30.739 Weighted Round Robin: Not Supported 00:07:30.739 Vendor Specific: Not Supported 00:07:30.739 Reset Timeout: 7500 ms 00:07:30.739 Doorbell Stride: 4 bytes 00:07:30.739 NVM Subsystem Reset: Not Supported 00:07:30.739 Command Sets Supported 00:07:30.739 NVM Command Set: Supported 00:07:30.739 Boot Partition: Not Supported 00:07:30.739 Memory Page Size Minimum: 4096 bytes 00:07:30.739 Memory Page Size Maximum: 65536 bytes 00:07:30.739 Persistent Memory Region: Not Supported 00:07:30.739 Optional Asynchronous Events Supported 00:07:30.739 Namespace Attribute Notices: Supported 00:07:30.739 Firmware Activation Notices: Not Supported 00:07:30.739 ANA Change Notices: Not Supported 00:07:30.739 PLE Aggregate Log Change Notices: Not Supported 00:07:30.739 LBA Status Info Alert Notices: Not Supported 00:07:30.739 EGE Aggregate Log Change Notices: Not Supported 00:07:30.739 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.739 Zone 
Descriptor Change Notices: Not Supported 00:07:30.739 Discovery Log Change Notices: Not Supported 00:07:30.739 Controller Attributes 00:07:30.739 128-bit Host Identifier: Not Supported 00:07:30.739 Non-Operational Permissive Mode: Not Supported 00:07:30.739 NVM Sets: Not Supported 00:07:30.739 Read Recovery Levels: Not Supported 00:07:30.739 Endurance Groups: Supported 00:07:30.739 Predictable Latency Mode: Not Supported 00:07:30.739 Traffic Based Keep Alive: Not Supported 00:07:30.739 Namespace Granularity: Not Supported 00:07:30.739 SQ Associations: Not Supported 00:07:30.739 UUID List: Not Supported 00:07:30.739 Multi-Domain Subsystem: Not Supported 00:07:30.739 Fixed Capacity Management: Not Supported 00:07:30.739 Variable Capacity Management: Not Supported 00:07:30.739 Delete Endurance Group: Not Supported 00:07:30.739 Delete NVM Set: Not Supported 00:07:30.739 Extended LBA Formats Supported: Supported 00:07:30.739 Flexible Data Placement Supported: Supported 00:07:30.739 00:07:30.739 Controller Memory Buffer Support 00:07:30.739 ================================ 00:07:30.739 Supported: No 00:07:30.739 00:07:30.739 Persistent Memory Region Support 00:07:30.739 ================================ 00:07:30.739 Supported: No 00:07:30.739 00:07:30.739 Admin Command Set Attributes 00:07:30.739 ============================ 00:07:30.739 Security Send/Receive: Not Supported 00:07:30.739 Format NVM: Supported 00:07:30.739 Firmware Activate/Download: Not Supported 00:07:30.739 Namespace Management: Supported 00:07:30.739 Device Self-Test: Not Supported 00:07:30.739 Directives: Supported 00:07:30.739 NVMe-MI: Not Supported 00:07:30.739 Virtualization Management: Not Supported 00:07:30.739 Doorbell Buffer Config: Supported 00:07:30.739 Get LBA Status Capability: Not Supported 00:07:30.739 Command & Feature Lockdown Capability: Not Supported 00:07:30.739 Abort Command Limit: 4 00:07:30.739 Async Event Request Limit: 4 00:07:30.739 Number of Firmware Slots: N/A 00:07:30.739 Firmware Slot 1 Read-Only: N/A 00:07:30.739 Firmware Activation Without Reset: N/A 00:07:30.739 Multiple Update Detection Support: N/A 00:07:30.739 Firmware Update Granularity: No Information Provided 00:07:30.739 Per-Namespace SMART Log: Yes 00:07:30.739 Asymmetric Namespace Access Log Page: Not Supported 00:07:30.739 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:30.739 Command Effects Log Page: Supported 00:07:30.739 Get Log Page Extended Data: Supported 00:07:30.739 Telemetry Log Pages: Not Supported 00:07:30.739 Persistent Event Log Pages: Not Supported 00:07:30.739 Supported Log Pages Log Page: May Support 00:07:30.739 Commands Supported & Effects Log Page: Not Supported 00:07:30.739 Feature Identifiers & Effects Log Page:May Support 00:07:30.739 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.739 Data Area 4 for Telemetry Log: Not Supported 00:07:30.739 Error Log Page Entries Supported: 1 00:07:30.739 Keep Alive: Not Supported 00:07:30.739 00:07:30.739 NVM Command Set Attributes 00:07:30.739 ========================== 00:07:30.739 Submission Queue Entry Size 00:07:30.739 Max: 64 00:07:30.739 Min: 64 00:07:30.739 Completion Queue Entry Size 00:07:30.739 Max: 16 00:07:30.739 Min: 16 00:07:30.739 Number of Namespaces: 256 00:07:30.739 Compare Command: Supported 00:07:30.739 Write Uncorrectable Command: Not Supported 00:07:30.739 Dataset Management Command: Supported 00:07:30.739 Write Zeroes Command: Supported 00:07:30.739 Set Features Save Field: Supported 00:07:30.739 Reservations: Not Supported 00:07:30.739
Timestamp: Supported 00:07:30.739 Copy: Supported 00:07:30.739 Volatile Write Cache: Present 00:07:30.739 Atomic Write Unit (Normal): 1 00:07:30.739 Atomic Write Unit (PFail): 1 00:07:30.739 Atomic Compare & Write Unit: 1 00:07:30.739 Fused Compare & Write: Not Supported 00:07:30.739 Scatter-Gather List 00:07:30.739 SGL Command Set: Supported 00:07:30.739 SGL Keyed: Not Supported 00:07:30.739 SGL Bit Bucket Descriptor: Not Supported 00:07:30.739 SGL Metadata Pointer: Not Supported 00:07:30.739 Oversized SGL: Not Supported 00:07:30.739 SGL Metadata Address: Not Supported 00:07:30.739 SGL Offset: Not Supported 00:07:30.739 Transport SGL Data Block: Not Supported 00:07:30.739 Replay Protected Memory Block: Not Supported 00:07:30.739 00:07:30.739 Firmware Slot Information 00:07:30.739 ========================= 00:07:30.739 Active slot: 1 00:07:30.739 Slot 1 Firmware Revision: 1.0 00:07:30.739 00:07:30.739 00:07:30.739 Commands Supported and Effects 00:07:30.739 ============================== 00:07:30.740 Admin Commands 00:07:30.740 -------------- 00:07:30.740 Delete I/O Submission Queue (00h): Supported 00:07:30.740 Create I/O Submission Queue (01h): Supported 00:07:30.740 Get Log Page (02h): Supported 00:07:30.740 Delete I/O Completion Queue (04h): Supported 00:07:30.740 Create I/O Completion Queue (05h): Supported 00:07:30.740 Identify (06h): Supported 00:07:30.740 Abort (08h): Supported 00:07:30.740 Set Features (09h): Supported 00:07:30.740 Get Features (0Ah): Supported 00:07:30.740 Asynchronous Event Request (0Ch): Supported 00:07:30.740 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.740 Directive Send (19h): Supported 00:07:30.740 Directive Receive (1Ah): Supported 00:07:30.740 Virtualization Management (1Ch): Supported 00:07:30.740 Doorbell Buffer Config (7Ch): Supported 00:07:30.740 Format NVM (80h): Supported LBA-Change 00:07:30.740 I/O Commands 00:07:30.740 ------------ 00:07:30.740 Flush (00h): Supported LBA-Change 00:07:30.740 Write (01h): Supported LBA-Change 00:07:30.740 Read (02h): Supported 00:07:30.740 Compare (05h): Supported 00:07:30.740 Write Zeroes (08h): Supported LBA-Change 00:07:30.740 Dataset Management (09h): Supported LBA-Change 00:07:30.740 Unknown (0Ch): Supported 00:07:30.740 Unknown (12h): Supported 00:07:30.740 Copy (19h): Supported LBA-Change 00:07:30.740 Unknown (1Dh): Supported LBA-Change 00:07:30.740 00:07:30.740 Error Log 00:07:30.740 ========= 00:07:30.740 00:07:30.740 Arbitration 00:07:30.740 =========== 00:07:30.740 Arbitration Burst: no limit 00:07:30.740 00:07:30.740 Power Management 00:07:30.740 ================ 00:07:30.740 Number of Power States: 1 00:07:30.740 Current Power State: Power State #0 00:07:30.740 Power State #0: 00:07:30.740 Max Power: 25.00 W 00:07:30.740 Non-Operational State: Operational 00:07:30.740 Entry Latency: 16 microseconds 00:07:30.740 Exit Latency: 4 microseconds 00:07:30.740 Relative Read Throughput: 0 00:07:30.740 Relative Read Latency: 0 00:07:30.740 Relative Write Throughput: 0 00:07:30.740 Relative Write Latency: 0 00:07:30.740 Idle Power: Not Reported 00:07:30.740 Active Power: Not Reported 00:07:30.740 Non-Operational Permissive Mode: Not Supported 00:07:30.740 00:07:30.740 Health Information 00:07:30.740 ================== 00:07:30.740 Critical Warnings: 00:07:30.740 Available Spare Space: OK 00:07:30.740 Temperature: OK 00:07:30.740 Device Reliability: OK 00:07:30.740 Read Only: No 00:07:30.740 Volatile Memory Backup: OK 00:07:30.740 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.740 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.740 Available Spare: 0% 00:07:30.740 Available Spare Threshold: 0% 00:07:30.740 Life Percentage Used: 0% 00:07:30.740 Data Units Read: 853 00:07:30.740 Data Units Written: 783 00:07:30.740 Host Read Commands: 38125 00:07:30.740 Host Write Commands: 37548 00:07:30.740 Controller Busy Time: 0 minutes 00:07:30.740 Power Cycles: 0 00:07:30.740 Power On Hours: 0 hours 00:07:30.740 Unsafe Shutdowns: 0 00:07:30.740 Unrecoverable Media Errors: 0 00:07:30.740 Lifetime Error Log Entries: 0 00:07:30.740 Warning Temperature Time: 0 minutes 00:07:30.740 Critical Temperature Time: 0 minutes 00:07:30.740 00:07:30.740 Number of Queues 00:07:30.740 ================ 00:07:30.740 Number of I/O Submission Queues: 64 00:07:30.740 Number of I/O Completion Queues: 64 00:07:30.740 00:07:30.740 ZNS Specific Controller Data 00:07:30.740 ============================ 00:07:30.740 Zone Append Size Limit: 0 00:07:30.740 00:07:30.740 00:07:30.740 Active Namespaces 00:07:30.740 ================= 00:07:30.740 Namespace ID:1 00:07:30.740 Error Recovery Timeout: Unlimited 00:07:30.740 Command Set Identifier: NVM (00h) 00:07:30.740 Deallocate: Supported 00:07:30.740 Deallocated/Unwritten Error: Supported 00:07:30.740 Deallocated Read Value: All 0x00 00:07:30.740 Deallocate in Write Zeroes: Not Supported 00:07:30.740 Deallocated Guard Field: 0xFFFF 00:07:30.740 Flush: Supported 00:07:30.740 Reservation: Not Supported 00:07:30.740 Namespace Sharing Capabilities: Multiple Controllers 00:07:30.740 Size (in LBAs): 262144 (1GiB) 00:07:30.740 Capacity (in LBAs): 262144 (1GiB) 00:07:30.740 Utilization (in LBAs): 262144 (1GiB) 00:07:30.740 Thin Provisioning: Not Supported 00:07:30.740 Per-NS Atomic Units: No 00:07:30.740 Maximum Single Source Range Length: 128 00:07:30.740 Maximum Copy Length: 128 00:07:30.740 Maximum Source Range Count: 128 00:07:30.740 NGUID/EUI64 Never Reused: No 00:07:30.740 Namespace Write Protected: No 00:07:30.740 Endurance group ID: 1 00:07:30.740 Number of LBA Formats: 8 00:07:30.740 Current LBA Format: LBA Format #04 00:07:30.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.740 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.740 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.740 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.740 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.740 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.740 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.740 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.740 00:07:30.740 Get Feature FDP: 00:07:30.740 ================ 00:07:30.740 Enabled: Yes 00:07:30.740 FDP configuration index: 0 00:07:30.740 00:07:30.740 FDP configurations log page 00:07:30.740 =========================== 00:07:30.740 Number of FDP configurations: 1 00:07:30.740 Version: 0 00:07:30.740 Size: 112 00:07:30.740 FDP Configuration Descriptor: 0 00:07:30.740 Descriptor Size: 96 00:07:30.740 Reclaim Group Identifier format: 2 00:07:30.740 FDP Volatile Write Cache: Not Present 00:07:30.740 FDP Configuration: Valid 00:07:30.740 Vendor Specific Size: 0 00:07:30.740 Number of Reclaim Groups: 2 00:07:30.740 Number of Reclaim Unit Handles: 8 00:07:30.740 Max Placement Identifiers: 128 00:07:30.740 Number of Namespaces Supported: 256 00:07:30.740 Reclaim Unit Nominal Size: 6000000 bytes 00:07:30.740 Estimated Reclaim Unit Time Limit: Not Reported 00:07:30.740 RUH Desc #000: RUH Type: Initially Isolated 00:07:30.740 RUH Desc #001: RUH
Type: Initially Isolated 00:07:30.740 RUH Desc #002: RUH Type: Initially Isolated 00:07:30.740 RUH Desc #003: RUH Type: Initially Isolated 00:07:30.740 RUH Desc #004: RUH Type: Initially Isolated 00:07:30.740 RUH Desc #005: RUH Type: Initially Isolated 00:07:30.740 RUH Desc #006: RUH Type: Initially Isolated 00:07:30.740 RUH Desc #007: RUH Type: Initially Isolated 00:07:30.740 00:07:30.740 FDP reclaim unit handle usage log page 00:07:30.740 ====================================== 00:07:30.740 Number of Reclaim Unit Handles: 8 00:07:30.740 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:30.740 RUH Usage Desc #001: RUH Attributes: Unused 00:07:30.740 RUH Usage Desc #002: RUH Attributes: Unused 00:07:30.740 RUH Usage Desc #003: RUH Attributes: Unused 00:07:30.740 RUH Usage Desc #004: RUH Attributes: Unused 00:07:30.740 RUH Usage Desc #005: RUH Attributes: Unused 00:07:30.740 RUH Usage Desc #006: RUH Attributes: Unused 00:07:30.740 RUH Usage Desc #007: RUH Attributes: Unused 00:07:30.740 00:07:30.740 FDP statistics log page 00:07:30.740 ======================= 00:07:30.740 Host bytes with metadata written: 481140736 00:07:30.740 [2024-11-19 23:20:16.856476] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74650 terminated unexpected 00:07:30.740 Media bytes with metadata written: 481193984 00:07:30.740 Media bytes erased: 0 00:07:30.740 00:07:30.740 FDP events log page 00:07:30.740 =================== 00:07:30.740 Number of FDP events: 0 00:07:30.740 00:07:30.740 NVM Specific Namespace Data 00:07:30.740 =========================== 00:07:30.740 Logical Block Storage Tag Mask: 0 00:07:30.740 Protection Information Capabilities: 00:07:30.740 16b Guard Protection Information Storage Tag Support: No 00:07:30.740 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.740 Storage Tag Check Read Support: No 00:07:30.740 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.740 ===================================================== 00:07:30.740 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:30.740 ===================================================== 00:07:30.740 Controller Capabilities/Features 00:07:30.740 ================================ 00:07:30.741 Vendor ID: 1b36 00:07:30.741 Subsystem Vendor ID: 1af4 00:07:30.741 Serial Number: 12342 00:07:30.741 Model Number: QEMU NVMe Ctrl 00:07:30.741 Firmware Version: 8.0.0 00:07:30.741 Recommended Arb Burst: 6 00:07:30.741 IEEE OUI Identifier: 00 54 52 00:07:30.741 Multi-path I/O 00:07:30.741 May have multiple subsystem ports: No 00:07:30.741 May have multiple controllers: No 00:07:30.741 Associated with SR-IOV VF: No 00:07:30.741 Max Data Transfer Size: 524288 00:07:30.741 Max Number of Namespaces: 256
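The FDP statistics log page above reports both host bytes and media bytes written with metadata; the ratio of the two gives a rough write-amplification figure for the run. A sketch with this run's values (awk is used only for the floating-point division, and the variable names are illustrative):

# Rough write amplification from the FDP statistics log page above.
host_bytes=481140736
media_bytes=481193984
awk -v h="$host_bytes" -v m="$media_bytes" \
    'BEGIN { printf "write amplification ~ %.4f\n", m / h }'   # ~1.0001 here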
00:07:30.741 Max Number of I/O Queues: 64 00:07:30.741 NVMe Specification Version (VS): 1.4 00:07:30.741 NVMe Specification Version (Identify): 1.4 00:07:30.741 Maximum Queue Entries: 2048 00:07:30.741 Contiguous Queues Required: Yes 00:07:30.741 Arbitration Mechanisms Supported 00:07:30.741 Weighted Round Robin: Not Supported 00:07:30.741 Vendor Specific: Not Supported 00:07:30.741 Reset Timeout: 7500 ms 00:07:30.741 Doorbell Stride: 4 bytes 00:07:30.741 NVM Subsystem Reset: Not Supported 00:07:30.741 Command Sets Supported 00:07:30.741 NVM Command Set: Supported 00:07:30.741 Boot Partition: Not Supported 00:07:30.741 Memory Page Size Minimum: 4096 bytes 00:07:30.741 Memory Page Size Maximum: 65536 bytes 00:07:30.741 Persistent Memory Region: Not Supported 00:07:30.741 Optional Asynchronous Events Supported 00:07:30.741 Namespace Attribute Notices: Supported 00:07:30.741 Firmware Activation Notices: Not Supported 00:07:30.741 ANA Change Notices: Not Supported 00:07:30.741 PLE Aggregate Log Change Notices: Not Supported 00:07:30.741 LBA Status Info Alert Notices: Not Supported 00:07:30.741 EGE Aggregate Log Change Notices: Not Supported 00:07:30.741 Normal NVM Subsystem Shutdown event: Not Supported 00:07:30.741 Zone Descriptor Change Notices: Not Supported 00:07:30.741 Discovery Log Change Notices: Not Supported 00:07:30.741 Controller Attributes 00:07:30.741 128-bit Host Identifier: Not Supported 00:07:30.741 Non-Operational Permissive Mode: Not Supported 00:07:30.741 NVM Sets: Not Supported 00:07:30.741 Read Recovery Levels: Not Supported 00:07:30.741 Endurance Groups: Not Supported 00:07:30.741 Predictable Latency Mode: Not Supported 00:07:30.741 Traffic Based Keep Alive: Not Supported 00:07:30.741 Namespace Granularity: Not Supported 00:07:30.741 SQ Associations: Not Supported 00:07:30.741 UUID List: Not Supported 00:07:30.741 Multi-Domain Subsystem: Not Supported 00:07:30.741 Fixed Capacity Management: Not Supported 00:07:30.741 Variable Capacity Management: Not Supported 00:07:30.741 Delete Endurance Group: Not Supported 00:07:30.741 Delete NVM Set: Not Supported 00:07:30.741 Extended LBA Formats Supported: Supported 00:07:30.741 Flexible Data Placement Supported: Not Supported 00:07:30.741 00:07:30.741 Controller Memory Buffer Support 00:07:30.741 ================================ 00:07:30.741 Supported: No 00:07:30.741 00:07:30.741 Persistent Memory Region Support 00:07:30.741 ================================ 00:07:30.741 Supported: No 00:07:30.741 00:07:30.741 Admin Command Set Attributes 00:07:30.741 ============================ 00:07:30.741 Security Send/Receive: Not Supported 00:07:30.741 Format NVM: Supported 00:07:30.741 Firmware Activate/Download: Not Supported 00:07:30.741 Namespace Management: Supported 00:07:30.741 Device Self-Test: Not Supported 00:07:30.741 Directives: Supported 00:07:30.741 NVMe-MI: Not Supported 00:07:30.741 Virtualization Management: Not Supported 00:07:30.741 Doorbell Buffer Config: Supported 00:07:30.741 Get LBA Status Capability: Not Supported 00:07:30.741 Command & Feature Lockdown Capability: Not Supported 00:07:30.741 Abort Command Limit: 4 00:07:30.741 Async Event Request Limit: 4 00:07:30.741 Number of Firmware Slots: N/A 00:07:30.741 Firmware Slot 1 Read-Only: N/A 00:07:30.741 Firmware Activation Without Reset: N/A 00:07:30.741 Multiple Update Detection Support: N/A 00:07:30.741 Firmware Update Granularity: No Information Provided 00:07:30.741 Per-Namespace SMART Log: Yes 00:07:30.741 Asymmetric Namespace Access Log Page: Not Supported
00:07:30.741 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:30.741 Command Effects Log Page: Supported 00:07:30.741 Get Log Page Extended Data: Supported 00:07:30.741 Telemetry Log Pages: Not Supported 00:07:30.741 Persistent Event Log Pages: Not Supported 00:07:30.741 Supported Log Pages Log Page: May Support 00:07:30.741 Commands Supported & Effects Log Page: Not Supported 00:07:30.741 Feature Identifiers & Effects Log Page: May Support 00:07:30.741 NVMe-MI Commands & Effects Log Page: May Support 00:07:30.741 Data Area 4 for Telemetry Log: Not Supported 00:07:30.741 Error Log Page Entries Supported: 1 00:07:30.741 Keep Alive: Not Supported 00:07:30.741 00:07:30.741 NVM Command Set Attributes 00:07:30.741 ========================== 00:07:30.741 Submission Queue Entry Size 00:07:30.741 Max: 64 00:07:30.741 Min: 64 00:07:30.741 Completion Queue Entry Size 00:07:30.741 Max: 16 00:07:30.741 Min: 16 00:07:30.741 Number of Namespaces: 256 00:07:30.741 Compare Command: Supported 00:07:30.741 Write Uncorrectable Command: Not Supported 00:07:30.741 Dataset Management Command: Supported 00:07:30.741 Write Zeroes Command: Supported 00:07:30.741 Set Features Save Field: Supported 00:07:30.741 Reservations: Not Supported 00:07:30.741 Timestamp: Supported 00:07:30.741 Copy: Supported 00:07:30.741 Volatile Write Cache: Present 00:07:30.741 Atomic Write Unit (Normal): 1 00:07:30.741 Atomic Write Unit (PFail): 1 00:07:30.741 Atomic Compare & Write Unit: 1 00:07:30.741 Fused Compare & Write: Not Supported 00:07:30.741 Scatter-Gather List 00:07:30.741 SGL Command Set: Supported 00:07:30.741 SGL Keyed: Not Supported 00:07:30.741 SGL Bit Bucket Descriptor: Not Supported 00:07:30.741 SGL Metadata Pointer: Not Supported 00:07:30.741 Oversized SGL: Not Supported 00:07:30.741 SGL Metadata Address: Not Supported 00:07:30.741 SGL Offset: Not Supported 00:07:30.741 Transport SGL Data Block: Not Supported 00:07:30.741 Replay Protected Memory Block: Not Supported 00:07:30.741 00:07:30.741 Firmware Slot Information 00:07:30.741 ========================= 00:07:30.741 Active slot: 1 00:07:30.741 Slot 1 Firmware Revision: 1.0 00:07:30.741 00:07:30.741 00:07:30.741 Commands Supported and Effects 00:07:30.741 ============================== 00:07:30.741 Admin Commands 00:07:30.741 -------------- 00:07:30.741 Delete I/O Submission Queue (00h): Supported 00:07:30.741 Create I/O Submission Queue (01h): Supported 00:07:30.741 Get Log Page (02h): Supported 00:07:30.741 Delete I/O Completion Queue (04h): Supported 00:07:30.741 Create I/O Completion Queue (05h): Supported 00:07:30.741 Identify (06h): Supported 00:07:30.741 Abort (08h): Supported 00:07:30.741 Set Features (09h): Supported 00:07:30.741 Get Features (0Ah): Supported 00:07:30.741 Asynchronous Event Request (0Ch): Supported 00:07:30.741 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:30.741 Directive Send (19h): Supported 00:07:30.741 Directive Receive (1Ah): Supported 00:07:30.741 Virtualization Management (1Ch): Supported 00:07:30.741 Doorbell Buffer Config (7Ch): Supported 00:07:30.741 Format NVM (80h): Supported LBA-Change 00:07:30.741 I/O Commands 00:07:30.741 ------------ 00:07:30.741 Flush (00h): Supported LBA-Change 00:07:30.741 Write (01h): Supported LBA-Change 00:07:30.741 Read (02h): Supported 00:07:30.741 Compare (05h): Supported 00:07:30.741 Write Zeroes (08h): Supported LBA-Change 00:07:30.742 Dataset Management (09h): Supported LBA-Change 00:07:30.742 Unknown (0Ch): Supported 00:07:30.742 Unknown (12h): Supported 00:07:30.742 Copy (19h):
Supported LBA-Change 00:07:30.742 Unknown (1Dh): Supported LBA-Change 00:07:30.742 00:07:30.742 Error Log 00:07:30.742 ========= 00:07:30.742 00:07:30.742 Arbitration 00:07:30.742 =========== 00:07:30.742 Arbitration Burst: no limit 00:07:30.742 00:07:30.742 Power Management 00:07:30.742 ================ 00:07:30.742 Number of Power States: 1 00:07:30.742 Current Power State: Power State #0 00:07:30.742 Power State #0: 00:07:30.742 Max Power: 25.00 W 00:07:30.742 Non-Operational State: Operational 00:07:30.742 Entry Latency: 16 microseconds 00:07:30.742 Exit Latency: 4 microseconds 00:07:30.742 Relative Read Throughput: 0 00:07:30.742 Relative Read Latency: 0 00:07:30.742 Relative Write Throughput: 0 00:07:30.742 Relative Write Latency: 0 00:07:30.742 Idle Power: Not Reported 00:07:30.742 Active Power: Not Reported 00:07:30.742 Non-Operational Permissive Mode: Not Supported 00:07:30.742 00:07:30.742 Health Information 00:07:30.742 ================== 00:07:30.742 Critical Warnings: 00:07:30.742 Available Spare Space: OK 00:07:30.742 Temperature: OK 00:07:30.742 Device Reliability: OK 00:07:30.742 Read Only: No 00:07:30.742 Volatile Memory Backup: OK 00:07:30.742 Current Temperature: 323 Kelvin (50 Celsius) 00:07:30.742 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:30.742 Available Spare: 0% 00:07:30.742 Available Spare Threshold: 0% 00:07:30.742 Life Percentage Used: 0% 00:07:30.742 Data Units Read: 2249 00:07:30.742 Data Units Written: 2036 00:07:30.742 Host Read Commands: 111410 00:07:30.742 Host Write Commands: 109679 00:07:30.742 Controller Busy Time: 0 minutes 00:07:30.742 Power Cycles: 0 00:07:30.742 Power On Hours: 0 hours 00:07:30.742 Unsafe Shutdowns: 0 00:07:30.742 Unrecoverable Media Errors: 0 00:07:30.742 Lifetime Error Log Entries: 0 00:07:30.742 Warning Temperature Time: 0 minutes 00:07:30.742 Critical Temperature Time: 0 minutes 00:07:30.742 00:07:30.742 Number of Queues 00:07:30.742 ================ 00:07:30.742 Number of I/O Submission Queues: 64 00:07:30.742 Number of I/O Completion Queues: 64 00:07:30.742 00:07:30.742 ZNS Specific Controller Data 00:07:30.742 ============================ 00:07:30.742 Zone Append Size Limit: 0 00:07:30.742 00:07:30.742 00:07:30.742 Active Namespaces 00:07:30.742 ================= 00:07:30.742 Namespace ID:1 00:07:30.742 Error Recovery Timeout: Unlimited 00:07:30.742 Command Set Identifier: NVM (00h) 00:07:30.742 Deallocate: Supported 00:07:30.742 Deallocated/Unwritten Error: Supported 00:07:30.742 Deallocated Read Value: All 0x00 00:07:30.742 Deallocate in Write Zeroes: Not Supported 00:07:30.742 Deallocated Guard Field: 0xFFFF 00:07:30.742 Flush: Supported 00:07:30.742 Reservation: Not Supported 00:07:30.742 Namespace Sharing Capabilities: Private 00:07:30.742 Size (in LBAs): 1048576 (4GiB) 00:07:30.742 Capacity (in LBAs): 1048576 (4GiB) 00:07:30.742 Utilization (in LBAs): 1048576 (4GiB) 00:07:30.742 Thin Provisioning: Not Supported 00:07:30.742 Per-NS Atomic Units: No 00:07:30.742 Maximum Single Source Range Length: 128 00:07:30.742 Maximum Copy Length: 128 00:07:30.742 Maximum Source Range Count: 128 00:07:30.742 NGUID/EUI64 Never Reused: No 00:07:30.742 Namespace Write Protected: No 00:07:30.742 Number of LBA Formats: 8 00:07:30.742 Current LBA Format: LBA Format #04 00:07:30.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.742 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.742 00:07:30.742 NVM Specific Namespace Data 00:07:30.742 =========================== 00:07:30.742 Logical Block Storage Tag Mask: 0 00:07:30.742 Protection Information Capabilities: 00:07:30.742 16b Guard Protection Information Storage Tag Support: No 00:07:30.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.742 Storage Tag Check Read Support: No 00:07:30.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Namespace ID:2 00:07:30.742 Error Recovery Timeout: Unlimited 00:07:30.742 Command Set Identifier: NVM (00h) 00:07:30.742 Deallocate: Supported 00:07:30.742 Deallocated/Unwritten Error: Supported 00:07:30.742 Deallocated Read Value: All 0x00 00:07:30.742 Deallocate in Write Zeroes: Not Supported 00:07:30.742 Deallocated Guard Field: 0xFFFF 00:07:30.742 Flush: Supported 00:07:30.742 Reservation: Not Supported 00:07:30.742 Namespace Sharing Capabilities: Private 00:07:30.742 Size (in LBAs): 1048576 (4GiB) 00:07:30.742 Capacity (in LBAs): 1048576 (4GiB) 00:07:30.742 Utilization (in LBAs): 1048576 (4GiB) 00:07:30.742 Thin Provisioning: Not Supported 00:07:30.742 Per-NS Atomic Units: No 00:07:30.742 Maximum Single Source Range Length: 128 00:07:30.742 Maximum Copy Length: 128 00:07:30.742 Maximum Source Range Count: 128 00:07:30.742 NGUID/EUI64 Never Reused: No 00:07:30.742 Namespace Write Protected: No 00:07:30.742 Number of LBA Formats: 8 00:07:30.742 Current LBA Format: LBA Format #04 00:07:30.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.742 00:07:30.742 NVM Specific Namespace Data 00:07:30.742 =========================== 00:07:30.742 Logical Block Storage Tag Mask: 0 00:07:30.742 Protection Information Capabilities: 00:07:30.742 16b Guard Protection Information Storage Tag Support: No 00:07:30.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.742 Storage Tag Check Read Support: No 00:07:30.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.742 Namespace ID:3 00:07:30.742 Error Recovery Timeout: Unlimited 00:07:30.742 Command Set Identifier: NVM (00h) 00:07:30.742 Deallocate: Supported 00:07:30.742 Deallocated/Unwritten Error: Supported 00:07:30.742 Deallocated Read Value: All 0x00 00:07:30.742 Deallocate in Write Zeroes: Not Supported 00:07:30.742 Deallocated Guard Field: 0xFFFF 00:07:30.742 Flush: Supported 00:07:30.742 Reservation: Not Supported 00:07:30.742 Namespace Sharing Capabilities: Private 00:07:30.742 Size (in LBAs): 1048576 (4GiB) 00:07:30.742 Capacity (in LBAs): 1048576 (4GiB) 00:07:30.742 Utilization (in LBAs): 1048576 (4GiB) 00:07:30.742 Thin Provisioning: Not Supported 00:07:30.742 Per-NS Atomic Units: No 00:07:30.742 Maximum Single Source Range Length: 128 00:07:30.742 Maximum Copy Length: 128 00:07:30.742 Maximum Source Range Count: 128 00:07:30.742 NGUID/EUI64 Never Reused: No 00:07:30.742 Namespace Write Protected: No 00:07:30.742 Number of LBA Formats: 8 00:07:30.742 Current LBA Format: LBA Format #04 00:07:30.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:30.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:30.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:30.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:30.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:30.743 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:30.743 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:30.743 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:30.743 00:07:30.743 NVM Specific Namespace Data 00:07:30.743 =========================== 00:07:30.743 Logical Block Storage Tag Mask: 0 00:07:30.743 Protection Information Capabilities: 00:07:30.743 16b Guard Protection Information Storage Tag Support: No 00:07:30.743 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:30.743 Storage Tag Check Read Support: No 00:07:30.743 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:30.743 23:20:16 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:30.743 23:20:16 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:31.022 ===================================================== 00:07:31.022 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:31.022 ===================================================== 00:07:31.022 Controller Capabilities/Features 00:07:31.022 ================================ 00:07:31.022 Vendor ID: 1b36 00:07:31.022 Subsystem Vendor ID: 1af4 00:07:31.022 Serial Number: 12340 00:07:31.022 Model Number: QEMU NVMe Ctrl 00:07:31.022 Firmware Version: 8.0.0 00:07:31.022 Recommended Arb Burst: 6 00:07:31.022 IEEE OUI Identifier: 00 54 52 00:07:31.022 Multi-path I/O 00:07:31.022 May have multiple subsystem ports: No 00:07:31.022 May have multiple controllers: No 00:07:31.022 Associated with SR-IOV VF: No 00:07:31.022 Max Data Transfer Size: 524288 00:07:31.022 Max Number of Namespaces: 256 00:07:31.022 Max Number of I/O Queues: 64 00:07:31.022 NVMe Specification Version (VS): 1.4 00:07:31.022 NVMe Specification Version (Identify): 1.4 00:07:31.022 Maximum Queue Entries: 2048 00:07:31.022 Contiguous Queues Required: Yes 00:07:31.022 Arbitration Mechanisms Supported 00:07:31.022 Weighted Round Robin: Not Supported 00:07:31.022 Vendor Specific: Not Supported 00:07:31.022 Reset Timeout: 7500 ms 00:07:31.022 Doorbell Stride: 4 bytes 00:07:31.022 NVM Subsystem Reset: Not Supported 00:07:31.022 Command Sets Supported 00:07:31.022 NVM Command Set: Supported 00:07:31.022 Boot Partition: Not Supported 00:07:31.022 Memory Page Size Minimum: 4096 bytes 00:07:31.022 Memory Page Size Maximum: 65536 bytes 00:07:31.022 Persistent Memory Region: Not Supported 00:07:31.022 Optional Asynchronous Events Supported 00:07:31.022 Namespace Attribute Notices: Supported 00:07:31.022 Firmware Activation Notices: Not Supported 00:07:31.022 ANA Change Notices: Not Supported 00:07:31.022 PLE Aggregate Log Change Notices: Not Supported 00:07:31.022 LBA Status Info Alert Notices: Not Supported 00:07:31.022 EGE Aggregate Log Change Notices: Not Supported 00:07:31.022 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.022 Zone Descriptor Change Notices: Not Supported 00:07:31.022 Discovery Log Change Notices: Not Supported 00:07:31.022 Controller Attributes 00:07:31.022 128-bit Host Identifier: Not Supported 00:07:31.022 Non-Operational Permissive Mode: Not Supported 00:07:31.022 NVM Sets: Not Supported 00:07:31.022 Read Recovery Levels: Not Supported 00:07:31.023 Endurance Groups: Not Supported 00:07:31.023 Predictable Latency Mode: Not Supported 00:07:31.023 Traffic Based Keep Alive: Not Supported 00:07:31.023 Namespace Granularity: Not Supported 00:07:31.023 SQ Associations: Not Supported 00:07:31.023 UUID List: Not Supported 00:07:31.023 Multi-Domain Subsystem: Not Supported 00:07:31.023 Fixed Capacity Management: Not Supported 00:07:31.023 Variable Capacity Management: Not Supported 00:07:31.023 Delete Endurance Group: Not Supported 00:07:31.023 Delete NVM Set: Not Supported 00:07:31.023 Extended LBA Formats Supported: Supported 00:07:31.023 Flexible Data Placement Supported: Not Supported 00:07:31.023 00:07:31.023 Controller Memory Buffer Support 00:07:31.023 ================================ 00:07:31.023 Supported: No 00:07:31.023 00:07:31.023 Persistent Memory Region Support 00:07:31.023 ================================ 00:07:31.023 Supported: No 00:07:31.023 00:07:31.023 Admin Command Set Attributes 00:07:31.023 ============================ 00:07:31.023 Security Send/Receive: Not Supported 00:07:31.023
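Each identify dump in this section is produced by the spdk_nvme_identify invocation shown above, once per PCIe BDF. A self-contained sketch of that loop, with the BDF list and binary path taken from this log (adjust both for another setup):

# Dump identify data for each local PCIe NVMe controller, as nvme.sh does.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0
done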
Format NVM: Supported 00:07:31.023 Firmware Activate/Download: Not Supported 00:07:31.023 Namespace Management: Supported 00:07:31.023 Device Self-Test: Not Supported 00:07:31.023 Directives: Supported 00:07:31.023 NVMe-MI: Not Supported 00:07:31.023 Virtualization Management: Not Supported 00:07:31.023 Doorbell Buffer Config: Supported 00:07:31.023 Get LBA Status Capability: Not Supported 00:07:31.023 Command & Feature Lockdown Capability: Not Supported 00:07:31.023 Abort Command Limit: 4 00:07:31.023 Async Event Request Limit: 4 00:07:31.023 Number of Firmware Slots: N/A 00:07:31.023 Firmware Slot 1 Read-Only: N/A 00:07:31.023 Firmware Activation Without Reset: N/A 00:07:31.023 Multiple Update Detection Support: N/A 00:07:31.023 Firmware Update Granularity: No Information Provided 00:07:31.023 Per-Namespace SMART Log: Yes 00:07:31.023 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.023 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:31.023 Command Effects Log Page: Supported 00:07:31.023 Get Log Page Extended Data: Supported 00:07:31.023 Telemetry Log Pages: Not Supported 00:07:31.023 Persistent Event Log Pages: Not Supported 00:07:31.023 Supported Log Pages Log Page: May Support 00:07:31.023 Commands Supported & Effects Log Page: Not Supported 00:07:31.023 Feature Identifiers & Effects Log Page: May Support 00:07:31.023 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.023 Data Area 4 for Telemetry Log: Not Supported 00:07:31.023 Error Log Page Entries Supported: 1 00:07:31.023 Keep Alive: Not Supported 00:07:31.023 00:07:31.023 NVM Command Set Attributes 00:07:31.023 ========================== 00:07:31.023 Submission Queue Entry Size 00:07:31.023 Max: 64 00:07:31.023 Min: 64 00:07:31.023 Completion Queue Entry Size 00:07:31.023 Max: 16 00:07:31.023 Min: 16 00:07:31.023 Number of Namespaces: 256 00:07:31.023 Compare Command: Supported 00:07:31.023 Write Uncorrectable Command: Not Supported 00:07:31.023 Dataset Management Command: Supported 00:07:31.023 Write Zeroes Command: Supported 00:07:31.023 Set Features Save Field: Supported 00:07:31.023 Reservations: Not Supported 00:07:31.023 Timestamp: Supported 00:07:31.023 Copy: Supported 00:07:31.023 Volatile Write Cache: Present 00:07:31.023 Atomic Write Unit (Normal): 1 00:07:31.023 Atomic Write Unit (PFail): 1 00:07:31.023 Atomic Compare & Write Unit: 1 00:07:31.023 Fused Compare & Write: Not Supported 00:07:31.023 Scatter-Gather List 00:07:31.023 SGL Command Set: Supported 00:07:31.023 SGL Keyed: Not Supported 00:07:31.023 SGL Bit Bucket Descriptor: Not Supported 00:07:31.023 SGL Metadata Pointer: Not Supported 00:07:31.023 Oversized SGL: Not Supported 00:07:31.023 SGL Metadata Address: Not Supported 00:07:31.023 SGL Offset: Not Supported 00:07:31.023 Transport SGL Data Block: Not Supported 00:07:31.023 Replay Protected Memory Block: Not Supported 00:07:31.023 00:07:31.023 Firmware Slot Information 00:07:31.023 ========================= 00:07:31.023 Active slot: 1 00:07:31.023 Slot 1 Firmware Revision: 1.0 00:07:31.023 00:07:31.023 00:07:31.023 Commands Supported and Effects 00:07:31.023 ============================== 00:07:31.023 Admin Commands 00:07:31.023 -------------- 00:07:31.023 Delete I/O Submission Queue (00h): Supported 00:07:31.023 Create I/O Submission Queue (01h): Supported 00:07:31.023 Get Log Page (02h): Supported 00:07:31.023 Delete I/O Completion Queue (04h): Supported 00:07:31.023 Create I/O Completion Queue (05h): Supported 00:07:31.023 Identify (06h): Supported 00:07:31.023 Abort (08h): Supported
00:07:31.023 Set Features (09h): Supported 00:07:31.023 Get Features (0Ah): Supported 00:07:31.023 Asynchronous Event Request (0Ch): Supported 00:07:31.023 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.023 Directive Send (19h): Supported 00:07:31.023 Directive Receive (1Ah): Supported 00:07:31.023 Virtualization Management (1Ch): Supported 00:07:31.023 Doorbell Buffer Config (7Ch): Supported 00:07:31.023 Format NVM (80h): Supported LBA-Change 00:07:31.023 I/O Commands 00:07:31.023 ------------ 00:07:31.023 Flush (00h): Supported LBA-Change 00:07:31.023 Write (01h): Supported LBA-Change 00:07:31.023 Read (02h): Supported 00:07:31.023 Compare (05h): Supported 00:07:31.023 Write Zeroes (08h): Supported LBA-Change 00:07:31.023 Dataset Management (09h): Supported LBA-Change 00:07:31.023 Unknown (0Ch): Supported 00:07:31.023 Unknown (12h): Supported 00:07:31.023 Copy (19h): Supported LBA-Change 00:07:31.023 Unknown (1Dh): Supported LBA-Change 00:07:31.023 00:07:31.023 Error Log 00:07:31.023 ========= 00:07:31.023 00:07:31.023 Arbitration 00:07:31.023 =========== 00:07:31.023 Arbitration Burst: no limit 00:07:31.023 00:07:31.023 Power Management 00:07:31.023 ================ 00:07:31.023 Number of Power States: 1 00:07:31.023 Current Power State: Power State #0 00:07:31.023 Power State #0: 00:07:31.023 Max Power: 25.00 W 00:07:31.023 Non-Operational State: Operational 00:07:31.023 Entry Latency: 16 microseconds 00:07:31.023 Exit Latency: 4 microseconds 00:07:31.023 Relative Read Throughput: 0 00:07:31.023 Relative Read Latency: 0 00:07:31.023 Relative Write Throughput: 0 00:07:31.023 Relative Write Latency: 0 00:07:31.023 Idle Power: Not Reported 00:07:31.023 Active Power: Not Reported 00:07:31.023 Non-Operational Permissive Mode: Not Supported 00:07:31.023 00:07:31.023 Health Information 00:07:31.023 ================== 00:07:31.023 Critical Warnings: 00:07:31.023 Available Spare Space: OK 00:07:31.023 Temperature: OK 00:07:31.023 Device Reliability: OK 00:07:31.023 Read Only: No 00:07:31.023 Volatile Memory Backup: OK 00:07:31.023 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.023 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.023 Available Spare: 0% 00:07:31.023 Available Spare Threshold: 0% 00:07:31.023 Life Percentage Used: 0% 00:07:31.023 Data Units Read: 704 00:07:31.023 Data Units Written: 632 00:07:31.023 Host Read Commands: 36720 00:07:31.023 Host Write Commands: 36506 00:07:31.023 Controller Busy Time: 0 minutes 00:07:31.023 Power Cycles: 0 00:07:31.023 Power On Hours: 0 hours 00:07:31.023 Unsafe Shutdowns: 0 00:07:31.023 Unrecoverable Media Errors: 0 00:07:31.023 Lifetime Error Log Entries: 0 00:07:31.023 Warning Temperature Time: 0 minutes 00:07:31.023 Critical Temperature Time: 0 minutes 00:07:31.023 00:07:31.023 Number of Queues 00:07:31.023 ================ 00:07:31.023 Number of I/O Submission Queues: 64 00:07:31.023 Number of I/O Completion Queues: 64 00:07:31.023 00:07:31.023 ZNS Specific Controller Data 00:07:31.023 ============================ 00:07:31.023 Zone Append Size Limit: 0 00:07:31.023 00:07:31.023 00:07:31.023 Active Namespaces 00:07:31.023 ================= 00:07:31.023 Namespace ID:1 00:07:31.023 Error Recovery Timeout: Unlimited 00:07:31.023 Command Set Identifier: NVM (00h) 00:07:31.023 Deallocate: Supported 00:07:31.023 Deallocated/Unwritten Error: Supported 00:07:31.023 Deallocated Read Value: All 0x00 00:07:31.023 Deallocate in Write Zeroes: Not Supported 00:07:31.023 Deallocated Guard Field: 0xFFFF 00:07:31.023 Flush: 
Supported 00:07:31.023 Reservation: Not Supported 00:07:31.023 Metadata Transferred as: Separate Metadata Buffer 00:07:31.023 Namespace Sharing Capabilities: Private 00:07:31.023 Size (in LBAs): 1548666 (5GiB) 00:07:31.023 Capacity (in LBAs): 1548666 (5GiB) 00:07:31.023 Utilization (in LBAs): 1548666 (5GiB) 00:07:31.023 Thin Provisioning: Not Supported 00:07:31.023 Per-NS Atomic Units: No 00:07:31.023 Maximum Single Source Range Length: 128 00:07:31.023 Maximum Copy Length: 128 00:07:31.023 Maximum Source Range Count: 128 00:07:31.023 NGUID/EUI64 Never Reused: No 00:07:31.023 Namespace Write Protected: No 00:07:31.023 Number of LBA Formats: 8 00:07:31.023 Current LBA Format: LBA Format #07 00:07:31.023 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.023 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.023 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.023 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.023 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.023 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.023 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.023 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.023 00:07:31.023 NVM Specific Namespace Data 00:07:31.023 =========================== 00:07:31.023 Logical Block Storage Tag Mask: 0 00:07:31.023 Protection Information Capabilities: 00:07:31.023 16b Guard Protection Information Storage Tag Support: No 00:07:31.023 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.023 Storage Tag Check Read Support: No 00:07:31.023 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.023 23:20:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.023 23:20:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:31.283 ===================================================== 00:07:31.283 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:31.283 ===================================================== 00:07:31.283 Controller Capabilities/Features 00:07:31.283 ================================ 00:07:31.283 Vendor ID: 1b36 00:07:31.283 Subsystem Vendor ID: 1af4 00:07:31.283 Serial Number: 12341 00:07:31.283 Model Number: QEMU NVMe Ctrl 00:07:31.283 Firmware Version: 8.0.0 00:07:31.283 Recommended Arb Burst: 6 00:07:31.283 IEEE OUI Identifier: 00 54 52 00:07:31.283 Multi-path I/O 00:07:31.283 May have multiple subsystem ports: No 00:07:31.283 May have multiple controllers: No 00:07:31.283 Associated with SR-IOV VF: No 00:07:31.283 Max Data Transfer Size: 524288 00:07:31.283 Max Number of Namespaces: 256 00:07:31.283 Max Number of I/O Queues: 64 00:07:31.283 NVMe 
Specification Version (VS): 1.4 00:07:31.283 NVMe Specification Version (Identify): 1.4 00:07:31.283 Maximum Queue Entries: 2048 00:07:31.283 Contiguous Queues Required: Yes 00:07:31.283 Arbitration Mechanisms Supported 00:07:31.283 Weighted Round Robin: Not Supported 00:07:31.283 Vendor Specific: Not Supported 00:07:31.283 Reset Timeout: 7500 ms 00:07:31.283 Doorbell Stride: 4 bytes 00:07:31.283 NVM Subsystem Reset: Not Supported 00:07:31.283 Command Sets Supported 00:07:31.283 NVM Command Set: Supported 00:07:31.283 Boot Partition: Not Supported 00:07:31.283 Memory Page Size Minimum: 4096 bytes 00:07:31.283 Memory Page Size Maximum: 65536 bytes 00:07:31.283 Persistent Memory Region: Not Supported 00:07:31.283 Optional Asynchronous Events Supported 00:07:31.283 Namespace Attribute Notices: Supported 00:07:31.283 Firmware Activation Notices: Not Supported 00:07:31.283 ANA Change Notices: Not Supported 00:07:31.283 PLE Aggregate Log Change Notices: Not Supported 00:07:31.283 LBA Status Info Alert Notices: Not Supported 00:07:31.283 EGE Aggregate Log Change Notices: Not Supported 00:07:31.283 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.283 Zone Descriptor Change Notices: Not Supported 00:07:31.283 Discovery Log Change Notices: Not Supported 00:07:31.283 Controller Attributes 00:07:31.283 128-bit Host Identifier: Not Supported 00:07:31.283 Non-Operational Permissive Mode: Not Supported 00:07:31.283 NVM Sets: Not Supported 00:07:31.283 Read Recovery Levels: Not Supported 00:07:31.283 Endurance Groups: Not Supported 00:07:31.283 Predictable Latency Mode: Not Supported 00:07:31.283 Traffic Based Keep Alive: Not Supported 00:07:31.283 Namespace Granularity: Not Supported 00:07:31.283 SQ Associations: Not Supported 00:07:31.283 UUID List: Not Supported 00:07:31.283 Multi-Domain Subsystem: Not Supported 00:07:31.283 Fixed Capacity Management: Not Supported 00:07:31.283 Variable Capacity Management: Not Supported 00:07:31.283 Delete Endurance Group: Not Supported 00:07:31.283 Delete NVM Set: Not Supported 00:07:31.283 Extended LBA Formats Supported: Supported 00:07:31.283 Flexible Data Placement Supported: Not Supported 00:07:31.283 00:07:31.283 Controller Memory Buffer Support 00:07:31.283 ================================ 00:07:31.283 Supported: No 00:07:31.283 00:07:31.283 Persistent Memory Region Support 00:07:31.283 ================================ 00:07:31.283 Supported: No 00:07:31.283 00:07:31.283 Admin Command Set Attributes 00:07:31.283 ============================ 00:07:31.283 Security Send/Receive: Not Supported 00:07:31.283 Format NVM: Supported 00:07:31.283 Firmware Activate/Download: Not Supported 00:07:31.283 Namespace Management: Supported 00:07:31.283 Device Self-Test: Not Supported 00:07:31.283 Directives: Supported 00:07:31.283 NVMe-MI: Not Supported 00:07:31.283 Virtualization Management: Not Supported 00:07:31.283 Doorbell Buffer Config: Supported 00:07:31.283 Get LBA Status Capability: Not Supported 00:07:31.283 Command & Feature Lockdown Capability: Not Supported 00:07:31.283 Abort Command Limit: 4 00:07:31.283 Async Event Request Limit: 4 00:07:31.283 Number of Firmware Slots: N/A 00:07:31.283 Firmware Slot 1 Read-Only: N/A 00:07:31.283 Firmware Activation Without Reset: N/A 00:07:31.283 Multiple Update Detection Support: N/A 00:07:31.283 Firmware Update Granularity: No Information Provided 00:07:31.283 Per-Namespace SMART Log: Yes 00:07:31.283 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.283 Subsystem NQN: nqn.2019-08.org.qemu:12341
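Because these dumps are line-oriented "Field: Value" text, individual fields can be pulled out with standard grep rather than read by eye. A sketch against the 0000:00:11.0 controller identified above, with field names exactly as they appear in these dumps:

# Extract a few identity fields instead of scanning the full dump.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 |
    grep -E 'Serial Number|Model Number|Subsystem NQN|Current LBA Format'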
00:07:31.283 Command Effects Log Page: Supported 00:07:31.283 Get Log Page Extended Data: Supported 00:07:31.283 Telemetry Log Pages: Not Supported 00:07:31.283 Persistent Event Log Pages: Not Supported 00:07:31.283 Supported Log Pages Log Page: May Support 00:07:31.283 Commands Supported & Effects Log Page: Not Supported 00:07:31.283 Feature Identifiers & Effects Log Page: May Support 00:07:31.283 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.283 Data Area 4 for Telemetry Log: Not Supported 00:07:31.283 Error Log Page Entries Supported: 1 00:07:31.283 Keep Alive: Not Supported 00:07:31.283 00:07:31.283 NVM Command Set Attributes 00:07:31.283 ========================== 00:07:31.283 Submission Queue Entry Size 00:07:31.283 Max: 64 00:07:31.283 Min: 64 00:07:31.283 Completion Queue Entry Size 00:07:31.283 Max: 16 00:07:31.283 Min: 16 00:07:31.283 Number of Namespaces: 256 00:07:31.283 Compare Command: Supported 00:07:31.283 Write Uncorrectable Command: Not Supported 00:07:31.283 Dataset Management Command: Supported 00:07:31.283 Write Zeroes Command: Supported 00:07:31.283 Set Features Save Field: Supported 00:07:31.283 Reservations: Not Supported 00:07:31.283 Timestamp: Supported 00:07:31.284 Copy: Supported 00:07:31.284 Volatile Write Cache: Present 00:07:31.284 Atomic Write Unit (Normal): 1 00:07:31.284 Atomic Write Unit (PFail): 1 00:07:31.284 Atomic Compare & Write Unit: 1 00:07:31.284 Fused Compare & Write: Not Supported 00:07:31.284 Scatter-Gather List 00:07:31.284 SGL Command Set: Supported 00:07:31.284 SGL Keyed: Not Supported 00:07:31.284 SGL Bit Bucket Descriptor: Not Supported 00:07:31.284 SGL Metadata Pointer: Not Supported 00:07:31.284 Oversized SGL: Not Supported 00:07:31.284 SGL Metadata Address: Not Supported 00:07:31.284 SGL Offset: Not Supported 00:07:31.284 Transport SGL Data Block: Not Supported 00:07:31.284 Replay Protected Memory Block: Not Supported 00:07:31.284 00:07:31.284 Firmware Slot Information 00:07:31.284 ========================= 00:07:31.284 Active slot: 1 00:07:31.284 Slot 1 Firmware Revision: 1.0 00:07:31.284 00:07:31.284 00:07:31.284 Commands Supported and Effects 00:07:31.284 ============================== 00:07:31.284 Admin Commands 00:07:31.284 -------------- 00:07:31.284 Delete I/O Submission Queue (00h): Supported 00:07:31.284 Create I/O Submission Queue (01h): Supported 00:07:31.284 Get Log Page (02h): Supported 00:07:31.284 Delete I/O Completion Queue (04h): Supported 00:07:31.284 Create I/O Completion Queue (05h): Supported 00:07:31.284 Identify (06h): Supported 00:07:31.284 Abort (08h): Supported 00:07:31.284 Set Features (09h): Supported 00:07:31.284 Get Features (0Ah): Supported 00:07:31.284 Asynchronous Event Request (0Ch): Supported 00:07:31.284 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.284 Directive Send (19h): Supported 00:07:31.284 Directive Receive (1Ah): Supported 00:07:31.284 Virtualization Management (1Ch): Supported 00:07:31.284 Doorbell Buffer Config (7Ch): Supported 00:07:31.284 Format NVM (80h): Supported LBA-Change 00:07:31.284 I/O Commands 00:07:31.284 ------------ 00:07:31.284 Flush (00h): Supported LBA-Change 00:07:31.284 Write (01h): Supported LBA-Change 00:07:31.284 Read (02h): Supported 00:07:31.284 Compare (05h): Supported 00:07:31.284 Write Zeroes (08h): Supported LBA-Change 00:07:31.284 Dataset Management (09h): Supported LBA-Change 00:07:31.284 Unknown (0Ch): Supported 00:07:31.284 Unknown (12h): Supported 00:07:31.284 Copy (19h): Supported LBA-Change 00:07:31.284 Unknown (1Dh):
Supported LBA-Change 00:07:31.284 00:07:31.284 Error Log 00:07:31.284 ========= 00:07:31.284 00:07:31.284 Arbitration 00:07:31.284 =========== 00:07:31.284 Arbitration Burst: no limit 00:07:31.284 00:07:31.284 Power Management 00:07:31.284 ================ 00:07:31.284 Number of Power States: 1 00:07:31.284 Current Power State: Power State #0 00:07:31.284 Power State #0: 00:07:31.284 Max Power: 25.00 W 00:07:31.284 Non-Operational State: Operational 00:07:31.284 Entry Latency: 16 microseconds 00:07:31.284 Exit Latency: 4 microseconds 00:07:31.284 Relative Read Throughput: 0 00:07:31.284 Relative Read Latency: 0 00:07:31.284 Relative Write Throughput: 0 00:07:31.284 Relative Write Latency: 0 00:07:31.284 Idle Power: Not Reported 00:07:31.284 Active Power: Not Reported 00:07:31.284 Non-Operational Permissive Mode: Not Supported 00:07:31.284 00:07:31.284 Health Information 00:07:31.284 ================== 00:07:31.284 Critical Warnings: 00:07:31.284 Available Spare Space: OK 00:07:31.284 Temperature: OK 00:07:31.284 Device Reliability: OK 00:07:31.284 Read Only: No 00:07:31.284 Volatile Memory Backup: OK 00:07:31.284 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.284 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.284 Available Spare: 0% 00:07:31.284 Available Spare Threshold: 0% 00:07:31.284 Life Percentage Used: 0% 00:07:31.284 Data Units Read: 1073 00:07:31.284 Data Units Written: 934 00:07:31.284 Host Read Commands: 53944 00:07:31.284 Host Write Commands: 52610 00:07:31.284 Controller Busy Time: 0 minutes 00:07:31.284 Power Cycles: 0 00:07:31.284 Power On Hours: 0 hours 00:07:31.284 Unsafe Shutdowns: 0 00:07:31.284 Unrecoverable Media Errors: 0 00:07:31.284 Lifetime Error Log Entries: 0 00:07:31.284 Warning Temperature Time: 0 minutes 00:07:31.284 Critical Temperature Time: 0 minutes 00:07:31.284 00:07:31.284 Number of Queues 00:07:31.284 ================ 00:07:31.284 Number of I/O Submission Queues: 64 00:07:31.284 Number of I/O Completion Queues: 64 00:07:31.284 00:07:31.284 ZNS Specific Controller Data 00:07:31.284 ============================ 00:07:31.284 Zone Append Size Limit: 0 00:07:31.284 00:07:31.284 00:07:31.284 Active Namespaces 00:07:31.284 ================= 00:07:31.284 Namespace ID:1 00:07:31.284 Error Recovery Timeout: Unlimited 00:07:31.284 Command Set Identifier: NVM (00h) 00:07:31.284 Deallocate: Supported 00:07:31.284 Deallocated/Unwritten Error: Supported 00:07:31.284 Deallocated Read Value: All 0x00 00:07:31.284 Deallocate in Write Zeroes: Not Supported 00:07:31.284 Deallocated Guard Field: 0xFFFF 00:07:31.284 Flush: Supported 00:07:31.284 Reservation: Not Supported 00:07:31.284 Namespace Sharing Capabilities: Private 00:07:31.284 Size (in LBAs): 1310720 (5GiB) 00:07:31.284 Capacity (in LBAs): 1310720 (5GiB) 00:07:31.284 Utilization (in LBAs): 1310720 (5GiB) 00:07:31.284 Thin Provisioning: Not Supported 00:07:31.284 Per-NS Atomic Units: No 00:07:31.284 Maximum Single Source Range Length: 128 00:07:31.284 Maximum Copy Length: 128 00:07:31.284 Maximum Source Range Count: 128 00:07:31.284 NGUID/EUI64 Never Reused: No 00:07:31.284 Namespace Write Protected: No 00:07:31.284 Number of LBA Formats: 8 00:07:31.284 Current LBA Format: LBA Format #04 00:07:31.284 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.284 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.284 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.284 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.284 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:31.284 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.284 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.284 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.284 00:07:31.284 NVM Specific Namespace Data 00:07:31.284 =========================== 00:07:31.284 Logical Block Storage Tag Mask: 0 00:07:31.284 Protection Information Capabilities: 00:07:31.284 16b Guard Protection Information Storage Tag Support: No 00:07:31.284 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.284 Storage Tag Check Read Support: No 00:07:31.284 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.284 23:20:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.284 23:20:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:31.284 ===================================================== 00:07:31.284 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:31.284 ===================================================== 00:07:31.284 Controller Capabilities/Features 00:07:31.284 ================================ 00:07:31.284 Vendor ID: 1b36 00:07:31.284 Subsystem Vendor ID: 1af4 00:07:31.284 Serial Number: 12342 00:07:31.284 Model Number: QEMU NVMe Ctrl 00:07:31.284 Firmware Version: 8.0.0 00:07:31.284 Recommended Arb Burst: 6 00:07:31.284 IEEE OUI Identifier: 00 54 52 00:07:31.284 Multi-path I/O 00:07:31.284 May have multiple subsystem ports: No 00:07:31.284 May have multiple controllers: No 00:07:31.284 Associated with SR-IOV VF: No 00:07:31.284 Max Data Transfer Size: 524288 00:07:31.284 Max Number of Namespaces: 256 00:07:31.284 Max Number of I/O Queues: 64 00:07:31.284 NVMe Specification Version (VS): 1.4 00:07:31.284 NVMe Specification Version (Identify): 1.4 00:07:31.284 Maximum Queue Entries: 2048 00:07:31.284 Contiguous Queues Required: Yes 00:07:31.284 Arbitration Mechanisms Supported 00:07:31.284 Weighted Round Robin: Not Supported 00:07:31.284 Vendor Specific: Not Supported 00:07:31.284 Reset Timeout: 7500 ms 00:07:31.284 Doorbell Stride: 4 bytes 00:07:31.284 NVM Subsystem Reset: Not Supported 00:07:31.284 Command Sets Supported 00:07:31.284 NVM Command Set: Supported 00:07:31.284 Boot Partition: Not Supported 00:07:31.285 Memory Page Size Minimum: 4096 bytes 00:07:31.285 Memory Page Size Maximum: 65536 bytes 00:07:31.285 Persistent Memory Region: Not Supported 00:07:31.285 Optional Asynchronous Events Supported 00:07:31.285 Namespace Attribute Notices: Supported 00:07:31.285 Firmware Activation Notices: Not Supported 00:07:31.285 ANA Change Notices: Not Supported 00:07:31.285 PLE Aggregate Log Change Notices: Not Supported 00:07:31.285 LBA Status Info Alert Notices: 
Not Supported 00:07:31.285 EGE Aggregate Log Change Notices: Not Supported 00:07:31.285 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.285 Zone Descriptor Change Notices: Not Supported 00:07:31.285 Discovery Log Change Notices: Not Supported 00:07:31.285 Controller Attributes 00:07:31.285 128-bit Host Identifier: Not Supported 00:07:31.285 Non-Operational Permissive Mode: Not Supported 00:07:31.285 NVM Sets: Not Supported 00:07:31.285 Read Recovery Levels: Not Supported 00:07:31.285 Endurance Groups: Not Supported 00:07:31.285 Predictable Latency Mode: Not Supported 00:07:31.285 Traffic Based Keep Alive: Not Supported 00:07:31.285 Namespace Granularity: Not Supported 00:07:31.285 SQ Associations: Not Supported 00:07:31.285 UUID List: Not Supported 00:07:31.285 Multi-Domain Subsystem: Not Supported 00:07:31.285 Fixed Capacity Management: Not Supported 00:07:31.285 Variable Capacity Management: Not Supported 00:07:31.285 Delete Endurance Group: Not Supported 00:07:31.285 Delete NVM Set: Not Supported 00:07:31.285 Extended LBA Formats Supported: Supported 00:07:31.285 Flexible Data Placement Supported: Not Supported 00:07:31.285 00:07:31.285 Controller Memory Buffer Support 00:07:31.285 ================================ 00:07:31.285 Supported: No 00:07:31.285 00:07:31.285 Persistent Memory Region Support 00:07:31.285 ================================ 00:07:31.285 Supported: No 00:07:31.285 00:07:31.285 Admin Command Set Attributes 00:07:31.285 ============================ 00:07:31.285 Security Send/Receive: Not Supported 00:07:31.285 Format NVM: Supported 00:07:31.285 Firmware Activate/Download: Not Supported 00:07:31.285 Namespace Management: Supported 00:07:31.285 Device Self-Test: Not Supported 00:07:31.285 Directives: Supported 00:07:31.285 NVMe-MI: Not Supported 00:07:31.285 Virtualization Management: Not Supported 00:07:31.285 Doorbell Buffer Config: Supported 00:07:31.285 Get LBA Status Capability: Not Supported 00:07:31.285 Command & Feature Lockdown Capability: Not Supported 00:07:31.285 Abort Command Limit: 4 00:07:31.285 Async Event Request Limit: 4 00:07:31.285 Number of Firmware Slots: N/A 00:07:31.285 Firmware Slot 1 Read-Only: N/A 00:07:31.285 Firmware Activation Without Reset: N/A 00:07:31.285 Multiple Update Detection Support: N/A 00:07:31.285 Firmware Update Granularity: No Information Provided 00:07:31.285 Per-Namespace SMART Log: Yes 00:07:31.285 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.285 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:31.285 Command Effects Log Page: Supported 00:07:31.285 Get Log Page Extended Data: Supported 00:07:31.285 Telemetry Log Pages: Not Supported 00:07:31.285 Persistent Event Log Pages: Not Supported 00:07:31.285 Supported Log Pages Log Page: May Support 00:07:31.285 Commands Supported & Effects Log Page: Not Supported 00:07:31.285 Feature Identifiers & Effects Log Page: May Support 00:07:31.285 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.285 Data Area 4 for Telemetry Log: Not Supported 00:07:31.285 Error Log Page Entries Supported: 1 00:07:31.285 Keep Alive: Not Supported 00:07:31.285 00:07:31.285 NVM Command Set Attributes 00:07:31.285 ========================== 00:07:31.285 Submission Queue Entry Size 00:07:31.285 Max: 64 00:07:31.285 Min: 64 00:07:31.285 Completion Queue Entry Size 00:07:31.285 Max: 16 00:07:31.285 Min: 16 00:07:31.285 Number of Namespaces: 256 00:07:31.285 Compare Command: Supported 00:07:31.285 Write Uncorrectable Command: Not Supported 00:07:31.285 Dataset Management Command:
Supported 00:07:31.285 Write Zeroes Command: Supported 00:07:31.285 Set Features Save Field: Supported 00:07:31.285 Reservations: Not Supported 00:07:31.285 Timestamp: Supported 00:07:31.285 Copy: Supported 00:07:31.285 Volatile Write Cache: Present 00:07:31.285 Atomic Write Unit (Normal): 1 00:07:31.285 Atomic Write Unit (PFail): 1 00:07:31.285 Atomic Compare & Write Unit: 1 00:07:31.285 Fused Compare & Write: Not Supported 00:07:31.285 Scatter-Gather List 00:07:31.285 SGL Command Set: Supported 00:07:31.285 SGL Keyed: Not Supported 00:07:31.285 SGL Bit Bucket Descriptor: Not Supported 00:07:31.285 SGL Metadata Pointer: Not Supported 00:07:31.285 Oversized SGL: Not Supported 00:07:31.285 SGL Metadata Address: Not Supported 00:07:31.285 SGL Offset: Not Supported 00:07:31.285 Transport SGL Data Block: Not Supported 00:07:31.285 Replay Protected Memory Block: Not Supported 00:07:31.285 00:07:31.285 Firmware Slot Information 00:07:31.285 ========================= 00:07:31.285 Active slot: 1 00:07:31.285 Slot 1 Firmware Revision: 1.0 00:07:31.285 00:07:31.285 00:07:31.285 Commands Supported and Effects 00:07:31.285 ============================== 00:07:31.285 Admin Commands 00:07:31.285 -------------- 00:07:31.285 Delete I/O Submission Queue (00h): Supported 00:07:31.285 Create I/O Submission Queue (01h): Supported 00:07:31.285 Get Log Page (02h): Supported 00:07:31.285 Delete I/O Completion Queue (04h): Supported 00:07:31.285 Create I/O Completion Queue (05h): Supported 00:07:31.285 Identify (06h): Supported 00:07:31.285 Abort (08h): Supported 00:07:31.285 Set Features (09h): Supported 00:07:31.285 Get Features (0Ah): Supported 00:07:31.285 Asynchronous Event Request (0Ch): Supported 00:07:31.285 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.285 Directive Send (19h): Supported 00:07:31.285 Directive Receive (1Ah): Supported 00:07:31.285 Virtualization Management (1Ch): Supported 00:07:31.285 Doorbell Buffer Config (7Ch): Supported 00:07:31.285 Format NVM (80h): Supported LBA-Change 00:07:31.285 I/O Commands 00:07:31.285 ------------ 00:07:31.285 Flush (00h): Supported LBA-Change 00:07:31.285 Write (01h): Supported LBA-Change 00:07:31.285 Read (02h): Supported 00:07:31.285 Compare (05h): Supported 00:07:31.285 Write Zeroes (08h): Supported LBA-Change 00:07:31.285 Dataset Management (09h): Supported LBA-Change 00:07:31.285 Unknown (0Ch): Supported 00:07:31.285 Unknown (12h): Supported 00:07:31.285 Copy (19h): Supported LBA-Change 00:07:31.285 Unknown (1Dh): Supported LBA-Change 00:07:31.285 00:07:31.285 Error Log 00:07:31.285 ========= 00:07:31.285 00:07:31.285 Arbitration 00:07:31.285 =========== 00:07:31.285 Arbitration Burst: no limit 00:07:31.285 00:07:31.285 Power Management 00:07:31.285 ================ 00:07:31.285 Number of Power States: 1 00:07:31.285 Current Power State: Power State #0 00:07:31.285 Power State #0: 00:07:31.285 Max Power: 25.00 W 00:07:31.285 Non-Operational State: Operational 00:07:31.285 Entry Latency: 16 microseconds 00:07:31.285 Exit Latency: 4 microseconds 00:07:31.285 Relative Read Throughput: 0 00:07:31.285 Relative Read Latency: 0 00:07:31.285 Relative Write Throughput: 0 00:07:31.285 Relative Write Latency: 0 00:07:31.285 Idle Power: Not Reported 00:07:31.285 Active Power: Not Reported 00:07:31.285 Non-Operational Permissive Mode: Not Supported 00:07:31.285 00:07:31.285 Health Information 00:07:31.285 ================== 00:07:31.285 Critical Warnings: 00:07:31.285 Available Spare Space: OK 00:07:31.285 Temperature: OK 00:07:31.285 Device 
Reliability: OK 00:07:31.285 Read Only: No 00:07:31.285 Volatile Memory Backup: OK 00:07:31.285 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.285 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.285 Available Spare: 0% 00:07:31.285 Available Spare Threshold: 0% 00:07:31.285 Life Percentage Used: 0% 00:07:31.285 Data Units Read: 2249 00:07:31.285 Data Units Written: 2036 00:07:31.285 Host Read Commands: 111410 00:07:31.285 Host Write Commands: 109679 00:07:31.285 Controller Busy Time: 0 minutes 00:07:31.285 Power Cycles: 0 00:07:31.285 Power On Hours: 0 hours 00:07:31.285 Unsafe Shutdowns: 0 00:07:31.285 Unrecoverable Media Errors: 0 00:07:31.285 Lifetime Error Log Entries: 0 00:07:31.285 Warning Temperature Time: 0 minutes 00:07:31.285 Critical Temperature Time: 0 minutes 00:07:31.285 00:07:31.285 Number of Queues 00:07:31.285 ================ 00:07:31.285 Number of I/O Submission Queues: 64 00:07:31.285 Number of I/O Completion Queues: 64 00:07:31.285 00:07:31.285 ZNS Specific Controller Data 00:07:31.285 ============================ 00:07:31.285 Zone Append Size Limit: 0 00:07:31.285 00:07:31.285 00:07:31.286 Active Namespaces 00:07:31.286 ================= 00:07:31.286 Namespace ID:1 00:07:31.286 Error Recovery Timeout: Unlimited 00:07:31.286 Command Set Identifier: NVM (00h) 00:07:31.286 Deallocate: Supported 00:07:31.286 Deallocated/Unwritten Error: Supported 00:07:31.286 Deallocated Read Value: All 0x00 00:07:31.286 Deallocate in Write Zeroes: Not Supported 00:07:31.286 Deallocated Guard Field: 0xFFFF 00:07:31.286 Flush: Supported 00:07:31.286 Reservation: Not Supported 00:07:31.286 Namespace Sharing Capabilities: Private 00:07:31.286 Size (in LBAs): 1048576 (4GiB) 00:07:31.286 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.286 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.286 Thin Provisioning: Not Supported 00:07:31.286 Per-NS Atomic Units: No 00:07:31.286 Maximum Single Source Range Length: 128 00:07:31.286 Maximum Copy Length: 128 00:07:31.286 Maximum Source Range Count: 128 00:07:31.286 NGUID/EUI64 Never Reused: No 00:07:31.286 Namespace Write Protected: No 00:07:31.286 Number of LBA Formats: 8 00:07:31.286 Current LBA Format: LBA Format #04 00:07:31.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.286 00:07:31.286 NVM Specific Namespace Data 00:07:31.286 =========================== 00:07:31.286 Logical Block Storage Tag Mask: 0 00:07:31.286 Protection Information Capabilities: 00:07:31.286 16b Guard Protection Information Storage Tag Support: No 00:07:31.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.286 Storage Tag Check Read Support: No 00:07:31.286 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Namespace ID:2 00:07:31.286 Error Recovery Timeout: Unlimited 00:07:31.286 Command Set Identifier: NVM (00h) 00:07:31.286 Deallocate: Supported 00:07:31.286 Deallocated/Unwritten Error: Supported 00:07:31.286 Deallocated Read Value: All 0x00 00:07:31.286 Deallocate in Write Zeroes: Not Supported 00:07:31.286 Deallocated Guard Field: 0xFFFF 00:07:31.286 Flush: Supported 00:07:31.286 Reservation: Not Supported 00:07:31.286 Namespace Sharing Capabilities: Private 00:07:31.286 Size (in LBAs): 1048576 (4GiB) 00:07:31.286 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.286 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.286 Thin Provisioning: Not Supported 00:07:31.286 Per-NS Atomic Units: No 00:07:31.286 Maximum Single Source Range Length: 128 00:07:31.286 Maximum Copy Length: 128 00:07:31.286 Maximum Source Range Count: 128 00:07:31.286 NGUID/EUI64 Never Reused: No 00:07:31.286 Namespace Write Protected: No 00:07:31.286 Number of LBA Formats: 8 00:07:31.286 Current LBA Format: LBA Format #04 00:07:31.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.286 00:07:31.286 NVM Specific Namespace Data 00:07:31.286 =========================== 00:07:31.286 Logical Block Storage Tag Mask: 0 00:07:31.286 Protection Information Capabilities: 00:07:31.286 16b Guard Protection Information Storage Tag Support: No 00:07:31.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.286 Storage Tag Check Read Support: No 00:07:31.286 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.286 Namespace ID:3 00:07:31.286 Error Recovery Timeout: Unlimited 00:07:31.286 Command Set Identifier: NVM (00h) 00:07:31.286 Deallocate: Supported 00:07:31.286 Deallocated/Unwritten Error: Supported 00:07:31.286 Deallocated Read Value: All 0x00 00:07:31.286 Deallocate in Write Zeroes: Not Supported 00:07:31.286 Deallocated Guard Field: 0xFFFF 00:07:31.286 Flush: Supported 00:07:31.286 Reservation: Not Supported 00:07:31.286 
Namespace Sharing Capabilities: Private 00:07:31.286 Size (in LBAs): 1048576 (4GiB) 00:07:31.286 Capacity (in LBAs): 1048576 (4GiB) 00:07:31.286 Utilization (in LBAs): 1048576 (4GiB) 00:07:31.286 Thin Provisioning: Not Supported 00:07:31.286 Per-NS Atomic Units: No 00:07:31.286 Maximum Single Source Range Length: 128 00:07:31.286 Maximum Copy Length: 128 00:07:31.286 Maximum Source Range Count: 128 00:07:31.286 NGUID/EUI64 Never Reused: No 00:07:31.286 Namespace Write Protected: No 00:07:31.286 Number of LBA Formats: 8 00:07:31.286 Current LBA Format: LBA Format #04 00:07:31.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:31.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.286 00:07:31.286 NVM Specific Namespace Data 00:07:31.286 =========================== 00:07:31.286 Logical Block Storage Tag Mask: 0 00:07:31.286 Protection Information Capabilities: 00:07:31.286 16b Guard Protection Information Storage Tag Support: No 00:07:31.286 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.545 Storage Tag Check Read Support: No 00:07:31.545 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.545 23:20:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:31.545 23:20:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:31.545 ===================================================== 00:07:31.545 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:31.545 ===================================================== 00:07:31.545 Controller Capabilities/Features 00:07:31.545 ================================ 00:07:31.545 Vendor ID: 1b36 00:07:31.545 Subsystem Vendor ID: 1af4 00:07:31.545 Serial Number: 12343 00:07:31.545 Model Number: QEMU NVMe Ctrl 00:07:31.545 Firmware Version: 8.0.0 00:07:31.545 Recommended Arb Burst: 6 00:07:31.545 IEEE OUI Identifier: 00 54 52 00:07:31.545 Multi-path I/O 00:07:31.545 May have multiple subsystem ports: No 00:07:31.545 May have multiple controllers: Yes 00:07:31.545 Associated with SR-IOV VF: No 00:07:31.545 Max Data Transfer Size: 524288 00:07:31.545 Max Number of Namespaces: 256 00:07:31.545 Max Number of I/O Queues: 64 00:07:31.545 NVMe Specification Version (VS): 1.4 00:07:31.545 NVMe Specification Version (Identify): 1.4 00:07:31.545 Maximum Queue Entries: 2048 
00:07:31.545 Contiguous Queues Required: Yes 00:07:31.545 Arbitration Mechanisms Supported 00:07:31.545 Weighted Round Robin: Not Supported 00:07:31.545 Vendor Specific: Not Supported 00:07:31.545 Reset Timeout: 7500 ms 00:07:31.545 Doorbell Stride: 4 bytes 00:07:31.545 NVM Subsystem Reset: Not Supported 00:07:31.545 Command Sets Supported 00:07:31.545 NVM Command Set: Supported 00:07:31.545 Boot Partition: Not Supported 00:07:31.545 Memory Page Size Minimum: 4096 bytes 00:07:31.545 Memory Page Size Maximum: 65536 bytes 00:07:31.545 Persistent Memory Region: Not Supported 00:07:31.545 Optional Asynchronous Events Supported 00:07:31.545 Namespace Attribute Notices: Supported 00:07:31.545 Firmware Activation Notices: Not Supported 00:07:31.545 ANA Change Notices: Not Supported 00:07:31.545 PLE Aggregate Log Change Notices: Not Supported 00:07:31.545 LBA Status Info Alert Notices: Not Supported 00:07:31.545 EGE Aggregate Log Change Notices: Not Supported 00:07:31.545 Normal NVM Subsystem Shutdown event: Not Supported 00:07:31.545 Zone Descriptor Change Notices: Not Supported 00:07:31.545 Discovery Log Change Notices: Not Supported 00:07:31.545 Controller Attributes 00:07:31.545 128-bit Host Identifier: Not Supported 00:07:31.545 Non-Operational Permissive Mode: Not Supported 00:07:31.545 NVM Sets: Not Supported 00:07:31.545 Read Recovery Levels: Not Supported 00:07:31.545 Endurance Groups: Supported 00:07:31.545 Predictable Latency Mode: Not Supported 00:07:31.545 Traffic Based Keep Alive: Not Supported 00:07:31.545 Namespace Granularity: Not Supported 00:07:31.545 SQ Associations: Not Supported 00:07:31.545 UUID List: Not Supported 00:07:31.545 Multi-Domain Subsystem: Not Supported 00:07:31.545 Fixed Capacity Management: Not Supported 00:07:31.545 Variable Capacity Management: Not Supported 00:07:31.545 Delete Endurance Group: Not Supported 00:07:31.545 Delete NVM Set: Not Supported 00:07:31.545 Extended LBA Formats Supported: Supported 00:07:31.545 Flexible Data Placement Supported: Supported 00:07:31.545 00:07:31.545 Controller Memory Buffer Support 00:07:31.545 ================================ 00:07:31.545 Supported: No 00:07:31.545 00:07:31.545 Persistent Memory Region Support 00:07:31.545 ================================ 00:07:31.545 Supported: No 00:07:31.545 00:07:31.545 Admin Command Set Attributes 00:07:31.545 ============================ 00:07:31.545 Security Send/Receive: Not Supported 00:07:31.545 Format NVM: Supported 00:07:31.545 Firmware Activate/Download: Not Supported 00:07:31.545 Namespace Management: Supported 00:07:31.545 Device Self-Test: Not Supported 00:07:31.545 Directives: Supported 00:07:31.545 NVMe-MI: Not Supported 00:07:31.545 Virtualization Management: Not Supported 00:07:31.545 Doorbell Buffer Config: Supported 00:07:31.545 Get LBA Status Capability: Not Supported 00:07:31.545 Command & Feature Lockdown Capability: Not Supported 00:07:31.546 Abort Command Limit: 4 00:07:31.546 Async Event Request Limit: 4 00:07:31.546 Number of Firmware Slots: N/A 00:07:31.546 Firmware Slot 1 Read-Only: N/A 00:07:31.546 Firmware Activation Without Reset: N/A 00:07:31.546 Multiple Update Detection Support: N/A 00:07:31.546 Firmware Update Granularity: No Information Provided 00:07:31.546 Per-Namespace SMART Log: Yes 00:07:31.546 Asymmetric Namespace Access Log Page: Not Supported 00:07:31.546 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:31.546 Command Effects Log Page: Supported 00:07:31.546 Get Log Page Extended Data: Supported 00:07:31.546 Telemetry Log Pages: Not
Supported 00:07:31.546 Persistent Event Log Pages: Not Supported 00:07:31.546 Supported Log Pages Log Page: May Support 00:07:31.546 Commands Supported & Effects Log Page: Not Supported 00:07:31.546 Feature Identifiers & Effects Log Page: May Support 00:07:31.546 NVMe-MI Commands & Effects Log Page: May Support 00:07:31.546 Data Area 4 for Telemetry Log: Not Supported 00:07:31.546 Error Log Page Entries Supported: 1 00:07:31.546 Keep Alive: Not Supported 00:07:31.546 00:07:31.546 NVM Command Set Attributes 00:07:31.546 ========================== 00:07:31.546 Submission Queue Entry Size 00:07:31.546 Max: 64 00:07:31.546 Min: 64 00:07:31.546 Completion Queue Entry Size 00:07:31.546 Max: 16 00:07:31.546 Min: 16 00:07:31.546 Number of Namespaces: 256 00:07:31.546 Compare Command: Supported 00:07:31.546 Write Uncorrectable Command: Not Supported 00:07:31.546 Dataset Management Command: Supported 00:07:31.546 Write Zeroes Command: Supported 00:07:31.546 Set Features Save Field: Supported 00:07:31.546 Reservations: Not Supported 00:07:31.546 Timestamp: Supported 00:07:31.546 Copy: Supported 00:07:31.546 Volatile Write Cache: Present 00:07:31.546 Atomic Write Unit (Normal): 1 00:07:31.546 Atomic Write Unit (PFail): 1 00:07:31.546 Atomic Compare & Write Unit: 1 00:07:31.546 Fused Compare & Write: Not Supported 00:07:31.546 Scatter-Gather List 00:07:31.546 SGL Command Set: Supported 00:07:31.546 SGL Keyed: Not Supported 00:07:31.546 SGL Bit Bucket Descriptor: Not Supported 00:07:31.546 SGL Metadata Pointer: Not Supported 00:07:31.546 Oversized SGL: Not Supported 00:07:31.546 SGL Metadata Address: Not Supported 00:07:31.546 SGL Offset: Not Supported 00:07:31.546 Transport SGL Data Block: Not Supported 00:07:31.546 Replay Protected Memory Block: Not Supported 00:07:31.546 00:07:31.546 Firmware Slot Information 00:07:31.546 ========================= 00:07:31.546 Active slot: 1 00:07:31.546 Slot 1 Firmware Revision: 1.0 00:07:31.546 00:07:31.546 00:07:31.546 Commands Supported and Effects 00:07:31.546 ============================== 00:07:31.546 Admin Commands 00:07:31.546 -------------- 00:07:31.546 Delete I/O Submission Queue (00h): Supported 00:07:31.546 Create I/O Submission Queue (01h): Supported 00:07:31.546 Get Log Page (02h): Supported 00:07:31.546 Delete I/O Completion Queue (04h): Supported 00:07:31.546 Create I/O Completion Queue (05h): Supported 00:07:31.546 Identify (06h): Supported 00:07:31.546 Abort (08h): Supported 00:07:31.546 Set Features (09h): Supported 00:07:31.546 Get Features (0Ah): Supported 00:07:31.546 Asynchronous Event Request (0Ch): Supported 00:07:31.546 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:31.546 Directive Send (19h): Supported 00:07:31.546 Directive Receive (1Ah): Supported 00:07:31.546 Virtualization Management (1Ch): Supported 00:07:31.546 Doorbell Buffer Config (7Ch): Supported 00:07:31.546 Format NVM (80h): Supported LBA-Change 00:07:31.546 I/O Commands 00:07:31.546 ------------ 00:07:31.546 Flush (00h): Supported LBA-Change 00:07:31.546 Write (01h): Supported LBA-Change 00:07:31.546 Read (02h): Supported 00:07:31.546 Compare (05h): Supported 00:07:31.546 Write Zeroes (08h): Supported LBA-Change 00:07:31.546 Dataset Management (09h): Supported LBA-Change 00:07:31.546 Unknown (0Ch): Supported 00:07:31.546 Unknown (12h): Supported 00:07:31.546 Copy (19h): Supported LBA-Change 00:07:31.546 Unknown (1Dh): Supported LBA-Change 00:07:31.546 00:07:31.546 Error Log 00:07:31.546 ========= 00:07:31.546 00:07:31.546 Arbitration 00:07:31.546 ===========
00:07:31.546 Arbitration Burst: no limit 00:07:31.546 00:07:31.546 Power Management 00:07:31.546 ================ 00:07:31.546 Number of Power States: 1 00:07:31.546 Current Power State: Power State #0 00:07:31.546 Power State #0: 00:07:31.546 Max Power: 25.00 W 00:07:31.546 Non-Operational State: Operational 00:07:31.546 Entry Latency: 16 microseconds 00:07:31.546 Exit Latency: 4 microseconds 00:07:31.546 Relative Read Throughput: 0 00:07:31.546 Relative Read Latency: 0 00:07:31.546 Relative Write Throughput: 0 00:07:31.546 Relative Write Latency: 0 00:07:31.546 Idle Power: Not Reported 00:07:31.546 Active Power: Not Reported 00:07:31.546 Non-Operational Permissive Mode: Not Supported 00:07:31.546 00:07:31.546 Health Information 00:07:31.546 ================== 00:07:31.546 Critical Warnings: 00:07:31.546 Available Spare Space: OK 00:07:31.546 Temperature: OK 00:07:31.546 Device Reliability: OK 00:07:31.546 Read Only: No 00:07:31.546 Volatile Memory Backup: OK 00:07:31.546 Current Temperature: 323 Kelvin (50 Celsius) 00:07:31.546 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:31.546 Available Spare: 0% 00:07:31.546 Available Spare Threshold: 0% 00:07:31.546 Life Percentage Used: 0% 00:07:31.546 Data Units Read: 853 00:07:31.546 Data Units Written: 783 00:07:31.546 Host Read Commands: 38125 00:07:31.546 Host Write Commands: 37548 00:07:31.546 Controller Busy Time: 0 minutes 00:07:31.546 Power Cycles: 0 00:07:31.546 Power On Hours: 0 hours 00:07:31.546 Unsafe Shutdowns: 0 00:07:31.546 Unrecoverable Media Errors: 0 00:07:31.546 Lifetime Error Log Entries: 0 00:07:31.546 Warning Temperature Time: 0 minutes 00:07:31.546 Critical Temperature Time: 0 minutes 00:07:31.546 00:07:31.546 Number of Queues 00:07:31.546 ================ 00:07:31.546 Number of I/O Submission Queues: 64 00:07:31.546 Number of I/O Completion Queues: 64 00:07:31.546 00:07:31.546 ZNS Specific Controller Data 00:07:31.546 ============================ 00:07:31.546 Zone Append Size Limit: 0 00:07:31.546 00:07:31.546 00:07:31.546 Active Namespaces 00:07:31.546 ================= 00:07:31.546 Namespace ID:1 00:07:31.546 Error Recovery Timeout: Unlimited 00:07:31.546 Command Set Identifier: NVM (00h) 00:07:31.546 Deallocate: Supported 00:07:31.546 Deallocated/Unwritten Error: Supported 00:07:31.546 Deallocated Read Value: All 0x00 00:07:31.546 Deallocate in Write Zeroes: Not Supported 00:07:31.546 Deallocated Guard Field: 0xFFFF 00:07:31.546 Flush: Supported 00:07:31.546 Reservation: Not Supported 00:07:31.546 Namespace Sharing Capabilities: Multiple Controllers 00:07:31.546 Size (in LBAs): 262144 (1GiB) 00:07:31.546 Capacity (in LBAs): 262144 (1GiB) 00:07:31.546 Utilization (in LBAs): 262144 (1GiB) 00:07:31.546 Thin Provisioning: Not Supported 00:07:31.546 Per-NS Atomic Units: No 00:07:31.546 Maximum Single Source Range Length: 128 00:07:31.546 Maximum Copy Length: 128 00:07:31.546 Maximum Source Range Count: 128 00:07:31.546 NGUID/EUI64 Never Reused: No 00:07:31.546 Namespace Write Protected: No 00:07:31.546 Endurance group ID: 1 00:07:31.546 Number of LBA Formats: 8 00:07:31.546 Current LBA Format: LBA Format #04 00:07:31.546 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:31.546 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:31.546 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:31.546 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:31.546 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:31.546 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:31.546 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:31.546 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:31.546 00:07:31.546 Get Feature FDP: 00:07:31.546 ================ 00:07:31.546 Enabled: Yes 00:07:31.546 FDP configuration index: 0 00:07:31.546 00:07:31.546 FDP configurations log page 00:07:31.546 =========================== 00:07:31.546 Number of FDP configurations: 1 00:07:31.546 Version: 0 00:07:31.546 Size: 112 00:07:31.546 FDP Configuration Descriptor: 0 00:07:31.546 Descriptor Size: 96 00:07:31.546 Reclaim Group Identifier format: 2 00:07:31.546 FDP Volatile Write Cache: Not Present 00:07:31.547 FDP Configuration: Valid 00:07:31.547 Vendor Specific Size: 0 00:07:31.547 Number of Reclaim Groups: 2 00:07:31.547 Number of Reclaim Unit Handles: 8 00:07:31.547 Max Placement Identifiers: 128 00:07:31.547 Number of Namespaces Supported: 256 00:07:31.547 Reclaim Unit Nominal Size: 6000000 bytes 00:07:31.547 Estimated Reclaim Unit Time Limit: Not Reported 00:07:31.547 RUH Desc #000: RUH Type: Initially Isolated 00:07:31.547 RUH Desc #001: RUH Type: Initially Isolated 00:07:31.547 RUH Desc #002: RUH Type: Initially Isolated 00:07:31.547 RUH Desc #003: RUH Type: Initially Isolated 00:07:31.547 RUH Desc #004: RUH Type: Initially Isolated 00:07:31.547 RUH Desc #005: RUH Type: Initially Isolated 00:07:31.547 RUH Desc #006: RUH Type: Initially Isolated 00:07:31.547 RUH Desc #007: RUH Type: Initially Isolated 00:07:31.547 00:07:31.547 FDP reclaim unit handle usage log page 00:07:31.547 ====================================== 00:07:31.547 Number of Reclaim Unit Handles: 8 00:07:31.547 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:31.547 RUH Usage Desc #001: RUH Attributes: Unused 00:07:31.547 RUH Usage Desc #002: RUH Attributes: Unused 00:07:31.547 RUH Usage Desc #003: RUH Attributes: Unused 00:07:31.547 RUH Usage Desc #004: RUH Attributes: Unused 00:07:31.547 RUH Usage Desc #005: RUH Attributes: Unused 00:07:31.547 RUH Usage Desc #006: RUH Attributes: Unused 00:07:31.547 RUH Usage Desc #007: RUH Attributes: Unused 00:07:31.547 00:07:31.547 FDP statistics log page 00:07:31.547 ======================= 00:07:31.547 Host bytes with metadata written: 481140736 00:07:31.547 Media bytes with metadata written: 481193984 00:07:31.547 Media bytes erased: 0 00:07:31.547 00:07:31.547 FDP events log page 00:07:31.547 =================== 00:07:31.547 Number of FDP events: 0 00:07:31.547 00:07:31.547 NVM Specific Namespace Data 00:07:31.547 =========================== 00:07:31.547 Logical Block Storage Tag Mask: 0 00:07:31.547 Protection Information Capabilities: 00:07:31.547 16b Guard Protection Information Storage Tag Support: No 00:07:31.547 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:31.547 Storage Tag Check Read Support: No 00:07:31.547 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:31.547 ************************************ 00:07:31.547 END TEST nvme_identify 00:07:31.547 ************************************ 00:07:31.547 00:07:31.547 real 0m1.092s 00:07:31.547 user 0m0.415s 00:07:31.547 sys 0m0.476s 00:07:31.547 23:20:17 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.547 23:20:17 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:31.805 23:20:17 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:31.805 23:20:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.805 23:20:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.805 23:20:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.805 ************************************ 00:07:31.805 START TEST nvme_perf 00:07:31.805 ************************************ 00:07:31.805 23:20:17 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:31.805 23:20:17 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:33.190 Initializing NVMe Controllers 00:07:33.190 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:33.190 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:33.190 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:33.190 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:33.190 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:33.190 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:33.190 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:33.190 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:33.190 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:33.190 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:33.190 Initialization complete. Launching workers. 
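A note on reading the results that follow: each device's "Summary latency data" block is derived from its cumulative latency histogram. A line such as "1.00000% : 5671.385us" reports the upper edge of the first histogram bucket whose cumulative share of IOs reaches that percentile. Below is a minimal sketch of that lookup, assuming this reading of the histogram columns (bucket range in microseconds, cumulative percentage of IOs, per-bucket IO count in parentheses); it is plain Python rather than SPDK code, summary_cutoff is an illustrative name, and the bucket values are transcribed from the PCIE (0000:00:10.0) histogram further down.

#!/usr/bin/env python3
# Sketch only: shows how a summary line like "1.00000% : 5671.385us" follows
# from the cumulative histogram that spdk_nvme_perf prints with -LL.
# (upper_edge_us, cumulative_pct) pairs transcribed from the 0000:00:10.0 data.
cumulative = [
    (5620.972, 0.5376),
    (5646.178, 0.8641),
    (5671.385, 1.4118),
    # ... remaining buckets omitted
]

def summary_cutoff(cumulative, pct):
    # Return the upper edge of the first bucket whose cumulative IO share
    # reaches the requested percentile.
    for edge_us, cum_pct in cumulative:
        if cum_pct >= pct:
            return edge_us
    return None

print(summary_cutoff(cumulative, 1.0))  # prints 5671.385

Running the sketch prints 5671.385, matching the "1.00000% : 5671.385us" line in the first summary block below.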
00:07:33.190 ======================================================== 00:07:33.190 Latency(us) 00:07:33.190 Device Information : IOPS MiB/s Average min max 00:07:33.190 PCIE (0000:00:10.0) NSID 1 from core 0: 19871.83 232.87 6442.45 4473.59 19958.85 00:07:33.190 PCIE (0000:00:11.0) NSID 1 from core 0: 19871.83 232.87 6437.71 4346.09 19177.71 00:07:33.190 PCIE (0000:00:13.0) NSID 1 from core 0: 19871.83 232.87 6432.44 3776.69 19115.04 00:07:33.190 PCIE (0000:00:12.0) NSID 1 from core 0: 19871.83 232.87 6426.95 3583.39 18518.94 00:07:33.190 PCIE (0000:00:12.0) NSID 2 from core 0: 19871.83 232.87 6421.48 3340.27 17910.37 00:07:33.190 PCIE (0000:00:12.0) NSID 3 from core 0: 19871.83 232.87 6416.07 3205.65 17368.65 00:07:33.190 ======================================================== 00:07:33.190 Total : 119230.97 1397.24 6429.52 3205.65 19958.85 00:07:33.190 00:07:33.191 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:33.191 ================================================================================= 00:07:33.191 1.00000% : 5671.385us 00:07:33.191 10.00000% : 5822.622us 00:07:33.191 25.00000% : 5999.065us 00:07:33.191 50.00000% : 6301.538us 00:07:33.191 75.00000% : 6604.012us 00:07:33.191 90.00000% : 6805.662us 00:07:33.191 95.00000% : 7713.083us 00:07:33.191 98.00000% : 9376.689us 00:07:33.191 99.00000% : 10485.760us 00:07:33.191 99.50000% : 14115.446us 00:07:33.191 99.90000% : 19660.800us 00:07:33.191 99.99000% : 19963.274us 00:07:33.191 99.99900% : 19963.274us 00:07:33.191 99.99990% : 19963.274us 00:07:33.191 99.99999% : 19963.274us 00:07:33.191 00:07:33.191 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:33.191 ================================================================================= 00:07:33.191 1.00000% : 5747.003us 00:07:33.191 10.00000% : 5898.240us 00:07:33.191 25.00000% : 6049.477us 00:07:33.191 50.00000% : 6276.332us 00:07:33.191 75.00000% : 6553.600us 00:07:33.191 90.00000% : 6755.249us 00:07:33.191 95.00000% : 7662.671us 00:07:33.191 98.00000% : 9679.163us 00:07:33.191 99.00000% : 10536.172us 00:07:33.191 99.50000% : 14115.446us 00:07:33.191 99.90000% : 18955.028us 00:07:33.191 99.99000% : 19257.502us 00:07:33.191 99.99900% : 19257.502us 00:07:33.191 99.99990% : 19257.502us 00:07:33.191 99.99999% : 19257.502us 00:07:33.191 00:07:33.191 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:33.191 ================================================================================= 00:07:33.191 1.00000% : 5747.003us 00:07:33.191 10.00000% : 5898.240us 00:07:33.191 25.00000% : 6049.477us 00:07:33.191 50.00000% : 6276.332us 00:07:33.191 75.00000% : 6553.600us 00:07:33.191 90.00000% : 6755.249us 00:07:33.191 95.00000% : 7511.434us 00:07:33.191 98.00000% : 9628.751us 00:07:33.191 99.00000% : 10536.172us 00:07:33.191 99.50000% : 14115.446us 00:07:33.191 99.90000% : 18854.203us 00:07:33.191 99.99000% : 19156.677us 00:07:33.191 99.99900% : 19156.677us 00:07:33.191 99.99990% : 19156.677us 00:07:33.191 99.99999% : 19156.677us 00:07:33.191 00:07:33.191 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:33.191 ================================================================================= 00:07:33.191 1.00000% : 5721.797us 00:07:33.191 10.00000% : 5898.240us 00:07:33.191 25.00000% : 6049.477us 00:07:33.191 50.00000% : 6276.332us 00:07:33.191 75.00000% : 6553.600us 00:07:33.191 90.00000% : 6755.249us 00:07:33.191 95.00000% : 7461.022us 00:07:33.191 98.00000% : 9578.338us 00:07:33.191 99.00000% : 
10636.997us 00:07:33.191 99.50000% : 13510.498us 00:07:33.191 99.90000% : 18249.255us 00:07:33.191 99.99000% : 18551.729us 00:07:33.191 99.99900% : 18551.729us 00:07:33.191 99.99990% : 18551.729us 00:07:33.191 99.99999% : 18551.729us 00:07:33.191 00:07:33.191 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:33.191 ================================================================================= 00:07:33.191 1.00000% : 5721.797us 00:07:33.192 10.00000% : 5898.240us 00:07:33.192 25.00000% : 6049.477us 00:07:33.192 50.00000% : 6276.332us 00:07:33.192 75.00000% : 6553.600us 00:07:33.192 90.00000% : 6704.837us 00:07:33.192 95.00000% : 7511.434us 00:07:33.192 98.00000% : 9578.338us 00:07:33.192 99.00000% : 10586.585us 00:07:33.192 99.50000% : 13006.375us 00:07:33.192 99.90000% : 17644.308us 00:07:33.192 99.99000% : 17946.782us 00:07:33.192 99.99900% : 17946.782us 00:07:33.192 99.99990% : 17946.782us 00:07:33.192 99.99999% : 17946.782us 00:07:33.192 00:07:33.192 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:33.192 ================================================================================= 00:07:33.192 1.00000% : 5721.797us 00:07:33.192 10.00000% : 5898.240us 00:07:33.192 25.00000% : 6049.477us 00:07:33.192 50.00000% : 6276.332us 00:07:33.192 75.00000% : 6553.600us 00:07:33.192 90.00000% : 6755.249us 00:07:33.192 95.00000% : 7511.434us 00:07:33.192 98.00000% : 9578.338us 00:07:33.192 99.00000% : 10435.348us 00:07:33.192 99.50000% : 12502.252us 00:07:33.192 99.90000% : 17140.185us 00:07:33.192 99.99000% : 17442.658us 00:07:33.192 99.99900% : 17442.658us 00:07:33.192 99.99990% : 17442.658us 00:07:33.192 99.99999% : 17442.658us 00:07:33.192 00:07:33.192 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:33.192 ============================================================================== 00:07:33.192 Range in us Cumulative IO count 00:07:33.192 4461.489 - 4486.695: 0.0151% ( 3) 00:07:33.192 4486.695 - 4511.902: 0.0251% ( 2) 00:07:33.192 4511.902 - 4537.108: 0.0301% ( 1) 00:07:33.192 4537.108 - 4562.314: 0.0452% ( 3) 00:07:33.192 4562.314 - 4587.520: 0.0502% ( 1) 00:07:33.193 4587.520 - 4612.726: 0.0653% ( 3) 00:07:33.193 4612.726 - 4637.932: 0.0754% ( 2) 00:07:33.193 4637.932 - 4663.138: 0.0854% ( 2) 00:07:33.193 4663.138 - 4688.345: 0.0955% ( 2) 00:07:33.193 4688.345 - 4713.551: 0.1005% ( 1) 00:07:33.193 4713.551 - 4738.757: 0.1156% ( 3) 00:07:33.193 4738.757 - 4763.963: 0.1256% ( 2) 00:07:33.193 4763.963 - 4789.169: 0.1306% ( 1) 00:07:33.193 4789.169 - 4814.375: 0.1457% ( 3) 00:07:33.193 4814.375 - 4839.582: 0.1507% ( 1) 00:07:33.193 4839.582 - 4864.788: 0.1608% ( 2) 00:07:33.193 4864.788 - 4889.994: 0.1708% ( 2) 00:07:33.193 4889.994 - 4915.200: 0.1809% ( 2) 00:07:33.193 4915.200 - 4940.406: 0.1909% ( 2) 00:07:33.193 4940.406 - 4965.612: 0.2010% ( 2) 00:07:33.193 4965.612 - 4990.818: 0.2110% ( 2) 00:07:33.193 4990.818 - 5016.025: 0.2160% ( 1) 00:07:33.193 5016.025 - 5041.231: 0.2311% ( 3) 00:07:33.193 5041.231 - 5066.437: 0.2412% ( 2) 00:07:33.193 5066.437 - 5091.643: 0.2512% ( 2) 00:07:33.193 5091.643 - 5116.849: 0.2562% ( 1) 00:07:33.193 5116.849 - 5142.055: 0.2663% ( 2) 00:07:33.193 5142.055 - 5167.262: 0.2763% ( 2) 00:07:33.193 5167.262 - 5192.468: 0.2864% ( 2) 00:07:33.193 5192.468 - 5217.674: 0.2964% ( 2) 00:07:33.193 5217.674 - 5242.880: 0.3014% ( 1) 00:07:33.193 5242.880 - 5268.086: 0.3165% ( 3) 00:07:33.193 5268.086 - 5293.292: 0.3215% ( 1) 00:07:33.193 5520.148 - 5545.354: 0.3467% ( 5) 00:07:33.193 5545.354 - 5570.560: 
0.3617% ( 3) 00:07:33.194 5570.560 - 5595.766: 0.4170% ( 11) 00:07:33.194 5595.766 - 5620.972: 0.5376% ( 24) 00:07:33.194 5620.972 - 5646.178: 0.8641% ( 65) 00:07:33.194 5646.178 - 5671.385: 1.4118% ( 109) 00:07:33.194 5671.385 - 5696.591: 2.0951% ( 136) 00:07:33.194 5696.591 - 5721.797: 3.2305% ( 226) 00:07:33.194 5721.797 - 5747.003: 4.6674% ( 286) 00:07:33.194 5747.003 - 5772.209: 6.4208% ( 349) 00:07:33.194 5772.209 - 5797.415: 8.2797% ( 370) 00:07:33.194 5797.415 - 5822.622: 10.4250% ( 427) 00:07:33.194 5822.622 - 5847.828: 12.4749% ( 408) 00:07:33.194 5847.828 - 5873.034: 14.7106% ( 445) 00:07:33.194 5873.034 - 5898.240: 16.9011% ( 436) 00:07:33.194 5898.240 - 5923.446: 19.1268% ( 443) 00:07:33.194 5923.446 - 5948.652: 21.2922% ( 431) 00:07:33.194 5948.652 - 5973.858: 23.4727% ( 434) 00:07:33.194 5973.858 - 5999.065: 25.7184% ( 447) 00:07:33.194 5999.065 - 6024.271: 27.7984% ( 414) 00:07:33.194 6024.271 - 6049.477: 30.0945% ( 457) 00:07:33.194 6049.477 - 6074.683: 32.1845% ( 416) 00:07:33.194 6074.683 - 6099.889: 34.5006% ( 461) 00:07:33.194 6099.889 - 6125.095: 36.5906% ( 416) 00:07:33.194 6125.095 - 6150.302: 38.9620% ( 472) 00:07:33.194 6150.302 - 6175.508: 41.0119% ( 408) 00:07:33.194 6175.508 - 6200.714: 43.3832% ( 472) 00:07:33.194 6200.714 - 6225.920: 45.4783% ( 417) 00:07:33.194 6225.920 - 6251.126: 47.7743% ( 457) 00:07:33.194 6251.126 - 6276.332: 49.9498% ( 433) 00:07:33.194 6276.332 - 6301.538: 52.1302% ( 434) 00:07:33.194 6301.538 - 6326.745: 54.3861% ( 449) 00:07:33.194 6326.745 - 6351.951: 56.6570% ( 452) 00:07:33.194 6351.951 - 6377.157: 58.8726% ( 441) 00:07:33.194 6377.157 - 6402.363: 61.1334% ( 450) 00:07:33.195 6402.363 - 6427.569: 63.4697% ( 465) 00:07:33.195 6427.569 - 6452.775: 65.5999% ( 424) 00:07:33.195 6452.775 - 6503.188: 70.1869% ( 913) 00:07:33.195 6503.188 - 6553.600: 74.7639% ( 911) 00:07:33.195 6553.600 - 6604.012: 79.1650% ( 876) 00:07:33.195 6604.012 - 6654.425: 83.3149% ( 826) 00:07:33.195 6654.425 - 6704.837: 86.8318% ( 700) 00:07:33.195 6704.837 - 6755.249: 89.2484% ( 481) 00:07:33.195 6755.249 - 6805.662: 90.6200% ( 273) 00:07:33.195 6805.662 - 6856.074: 91.3284% ( 141) 00:07:33.195 6856.074 - 6906.486: 91.7353% ( 81) 00:07:33.195 6906.486 - 6956.898: 92.0117% ( 55) 00:07:33.195 6956.898 - 7007.311: 92.2930% ( 56) 00:07:33.195 7007.311 - 7057.723: 92.5643% ( 54) 00:07:33.195 7057.723 - 7108.135: 92.8457% ( 56) 00:07:33.195 7108.135 - 7158.548: 93.0868% ( 48) 00:07:33.195 7158.548 - 7208.960: 93.3380% ( 50) 00:07:33.195 7208.960 - 7259.372: 93.5842% ( 49) 00:07:33.195 7259.372 - 7309.785: 93.7952% ( 42) 00:07:33.195 7309.785 - 7360.197: 93.9912% ( 39) 00:07:33.195 7360.197 - 7410.609: 94.1620% ( 34) 00:07:33.195 7410.609 - 7461.022: 94.3077% ( 29) 00:07:33.195 7461.022 - 7511.434: 94.4584% ( 30) 00:07:33.195 7511.434 - 7561.846: 94.6141% ( 31) 00:07:33.195 7561.846 - 7612.258: 94.8051% ( 38) 00:07:33.195 7612.258 - 7662.671: 94.9508% ( 29) 00:07:33.195 7662.671 - 7713.083: 95.0814% ( 26) 00:07:33.195 7713.083 - 7763.495: 95.2723% ( 38) 00:07:33.195 7763.495 - 7813.908: 95.4381% ( 33) 00:07:33.195 7813.908 - 7864.320: 95.5587% ( 24) 00:07:33.195 7864.320 - 7914.732: 95.7044% ( 29) 00:07:33.195 7914.732 - 7965.145: 95.8451% ( 28) 00:07:33.195 7965.145 - 8015.557: 95.9807% ( 27) 00:07:33.195 8015.557 - 8065.969: 96.1415% ( 32) 00:07:33.195 8065.969 - 8116.382: 96.2721% ( 26) 00:07:33.195 8116.382 - 8166.794: 96.4228% ( 30) 00:07:33.196 8166.794 - 8217.206: 96.5484% ( 25) 00:07:33.196 8217.206 - 8267.618: 96.6690% ( 24) 00:07:33.196 8267.618 - 
8318.031: 96.7896% ( 24) 00:07:33.196 8318.031 - 8368.443: 96.8951% ( 21) 00:07:33.196 8368.443 - 8418.855: 96.9805% ( 17) 00:07:33.196 8418.855 - 8469.268: 97.0760% ( 19) 00:07:33.196 8469.268 - 8519.680: 97.1564% ( 16) 00:07:33.196 8519.680 - 8570.092: 97.2418% ( 17) 00:07:33.196 8570.092 - 8620.505: 97.3221% ( 16) 00:07:33.196 8620.505 - 8670.917: 97.4126% ( 18) 00:07:33.196 8670.917 - 8721.329: 97.4779% ( 13) 00:07:33.196 8721.329 - 8771.742: 97.5332% ( 11) 00:07:33.196 8771.742 - 8822.154: 97.5834% ( 10) 00:07:33.196 8822.154 - 8872.566: 97.6437% ( 12) 00:07:33.196 8872.566 - 8922.978: 97.6889% ( 9) 00:07:33.196 8922.978 - 8973.391: 97.7241% ( 7) 00:07:33.196 8973.391 - 9023.803: 97.7643% ( 8) 00:07:33.196 9023.803 - 9074.215: 97.8045% ( 8) 00:07:33.196 9074.215 - 9124.628: 97.8497% ( 9) 00:07:33.196 9124.628 - 9175.040: 97.8698% ( 4) 00:07:33.196 9175.040 - 9225.452: 97.9049% ( 7) 00:07:33.196 9225.452 - 9275.865: 97.9351% ( 6) 00:07:33.196 9275.865 - 9326.277: 97.9703% ( 7) 00:07:33.196 9326.277 - 9376.689: 98.0155% ( 9) 00:07:33.196 9376.689 - 9427.102: 98.0305% ( 3) 00:07:33.196 9427.102 - 9477.514: 98.0506% ( 4) 00:07:33.196 9477.514 - 9527.926: 98.0758% ( 5) 00:07:33.196 9527.926 - 9578.338: 98.1361% ( 12) 00:07:33.196 9578.338 - 9628.751: 98.1762% ( 8) 00:07:33.196 9628.751 - 9679.163: 98.2365% ( 12) 00:07:33.196 9679.163 - 9729.575: 98.2968% ( 12) 00:07:33.196 9729.575 - 9779.988: 98.3621% ( 13) 00:07:33.196 9779.988 - 9830.400: 98.4074% ( 9) 00:07:33.196 9830.400 - 9880.812: 98.4626% ( 11) 00:07:33.196 9880.812 - 9931.225: 98.5179% ( 11) 00:07:33.196 9931.225 - 9981.637: 98.5631% ( 9) 00:07:33.196 9981.637 - 10032.049: 98.5983% ( 7) 00:07:33.196 10032.049 - 10082.462: 98.6586% ( 12) 00:07:33.196 10082.462 - 10132.874: 98.6988% ( 8) 00:07:33.196 10132.874 - 10183.286: 98.7490% ( 10) 00:07:33.196 10183.286 - 10233.698: 98.7942% ( 9) 00:07:33.196 10233.698 - 10284.111: 98.8495% ( 11) 00:07:33.196 10284.111 - 10334.523: 98.8897% ( 8) 00:07:33.196 10334.523 - 10384.935: 98.9399% ( 10) 00:07:33.196 10384.935 - 10435.348: 98.9902% ( 10) 00:07:33.196 10435.348 - 10485.760: 99.0354% ( 9) 00:07:33.196 10485.760 - 10536.172: 99.0806% ( 9) 00:07:33.196 10536.172 - 10586.585: 99.1258% ( 9) 00:07:33.197 10586.585 - 10636.997: 99.1660% ( 8) 00:07:33.197 10636.997 - 10687.409: 99.1811% ( 3) 00:07:33.197 10687.409 - 10737.822: 99.1911% ( 2) 00:07:33.197 10737.822 - 10788.234: 99.2012% ( 2) 00:07:33.197 10788.234 - 10838.646: 99.2062% ( 1) 00:07:33.197 10838.646 - 10889.058: 99.2112% ( 1) 00:07:33.197 10889.058 - 10939.471: 99.2162% ( 1) 00:07:33.197 10989.883 - 11040.295: 99.2313% ( 3) 00:07:33.197 11040.295 - 11090.708: 99.2363% ( 1) 00:07:33.197 11090.708 - 11141.120: 99.2414% ( 1) 00:07:33.197 11141.120 - 11191.532: 99.2514% ( 2) 00:07:33.197 11191.532 - 11241.945: 99.2564% ( 1) 00:07:33.197 11241.945 - 11292.357: 99.2665% ( 2) 00:07:33.197 11292.357 - 11342.769: 99.2715% ( 1) 00:07:33.197 11393.182 - 11443.594: 99.2866% ( 3) 00:07:33.197 11494.006 - 11544.418: 99.2966% ( 2) 00:07:33.197 11544.418 - 11594.831: 99.3016% ( 1) 00:07:33.197 11594.831 - 11645.243: 99.3117% ( 2) 00:07:33.197 11645.243 - 11695.655: 99.3167% ( 1) 00:07:33.197 11695.655 - 11746.068: 99.3268% ( 2) 00:07:33.197 11746.068 - 11796.480: 99.3318% ( 1) 00:07:33.197 11796.480 - 11846.892: 99.3368% ( 1) 00:07:33.197 11846.892 - 11897.305: 99.3469% ( 2) 00:07:33.197 11897.305 - 11947.717: 99.3519% ( 1) 00:07:33.197 11947.717 - 11998.129: 99.3569% ( 1) 00:07:33.197 13510.498 - 13611.323: 99.3720% ( 3) 00:07:33.197 13611.323 - 
13712.148: 99.4122% ( 8) 00:07:33.198 13712.148 - 13812.972: 99.4373% ( 5) 00:07:33.198 13812.972 - 13913.797: 99.4674% ( 6) 00:07:33.198 13913.797 - 14014.622: 99.4976% ( 6) 00:07:33.198 14014.622 - 14115.446: 99.5227% ( 5) 00:07:33.198 14115.446 - 14216.271: 99.5579% ( 7) 00:07:33.198 14216.271 - 14317.095: 99.5830% ( 5) 00:07:33.198 14317.095 - 14417.920: 99.6131% ( 6) 00:07:33.198 14417.920 - 14518.745: 99.6182% ( 1) 00:07:33.198 14619.569 - 14720.394: 99.6483% ( 6) 00:07:33.198 14720.394 - 14821.218: 99.6734% ( 5) 00:07:33.198 14821.218 - 14922.043: 99.6785% ( 1) 00:07:33.198 18854.203 - 18955.028: 99.7136% ( 7) 00:07:33.198 18955.028 - 19055.852: 99.7387% ( 5) 00:07:33.198 19055.852 - 19156.677: 99.7639% ( 5) 00:07:33.198 19156.677 - 19257.502: 99.7940% ( 6) 00:07:33.198 19257.502 - 19358.326: 99.8292% ( 7) 00:07:33.198 19358.326 - 19459.151: 99.8543% ( 5) 00:07:33.198 19459.151 - 19559.975: 99.8844% ( 6) 00:07:33.198 19559.975 - 19660.800: 99.9146% ( 6) 00:07:33.198 19660.800 - 19761.625: 99.9447% ( 6) 00:07:33.198 19761.625 - 19862.449: 99.9799% ( 7) 00:07:33.198 19862.449 - 19963.274: 100.0000% ( 4) 00:07:33.198 00:07:33.198 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:33.198 ============================================================================== 00:07:33.198 Range in us Cumulative IO count 00:07:33.198 4335.458 - 4360.665: 0.0050% ( 1) 00:07:33.198 4360.665 - 4385.871: 0.0151% ( 2) 00:07:33.198 4385.871 - 4411.077: 0.0251% ( 2) 00:07:33.198 4411.077 - 4436.283: 0.0352% ( 2) 00:07:33.198 4436.283 - 4461.489: 0.0502% ( 3) 00:07:33.198 4461.489 - 4486.695: 0.0653% ( 3) 00:07:33.198 4486.695 - 4511.902: 0.0804% ( 3) 00:07:33.198 4511.902 - 4537.108: 0.0904% ( 2) 00:07:33.199 4537.108 - 4562.314: 0.1105% ( 4) 00:07:33.199 4562.314 - 4587.520: 0.1206% ( 2) 00:07:33.199 4587.520 - 4612.726: 0.1357% ( 3) 00:07:33.199 4612.726 - 4637.932: 0.1457% ( 2) 00:07:33.199 4637.932 - 4663.138: 0.1557% ( 2) 00:07:33.199 4663.138 - 4688.345: 0.1708% ( 3) 00:07:33.199 4688.345 - 4713.551: 0.1809% ( 2) 00:07:33.199 4713.551 - 4738.757: 0.1909% ( 2) 00:07:33.199 4738.757 - 4763.963: 0.2010% ( 2) 00:07:33.199 4763.963 - 4789.169: 0.2160% ( 3) 00:07:33.199 4789.169 - 4814.375: 0.2261% ( 2) 00:07:33.199 4814.375 - 4839.582: 0.2361% ( 2) 00:07:33.199 4839.582 - 4864.788: 0.2462% ( 2) 00:07:33.199 4864.788 - 4889.994: 0.2613% ( 3) 00:07:33.199 4889.994 - 4915.200: 0.2713% ( 2) 00:07:33.199 4915.200 - 4940.406: 0.2864% ( 3) 00:07:33.199 4940.406 - 4965.612: 0.2964% ( 2) 00:07:33.199 4965.612 - 4990.818: 0.3065% ( 2) 00:07:33.199 4990.818 - 5016.025: 0.3215% ( 3) 00:07:33.199 5394.117 - 5419.323: 0.3366% ( 3) 00:07:33.199 5419.323 - 5444.529: 0.3467% ( 2) 00:07:33.199 5444.529 - 5469.735: 0.3567% ( 2) 00:07:33.199 5469.735 - 5494.942: 0.3617% ( 1) 00:07:33.199 5494.942 - 5520.148: 0.3718% ( 2) 00:07:33.199 5520.148 - 5545.354: 0.3869% ( 3) 00:07:33.199 5545.354 - 5570.560: 0.3969% ( 2) 00:07:33.199 5570.560 - 5595.766: 0.4070% ( 2) 00:07:33.199 5595.766 - 5620.972: 0.4270% ( 4) 00:07:33.199 5620.972 - 5646.178: 0.4672% ( 8) 00:07:33.199 5646.178 - 5671.385: 0.5476% ( 16) 00:07:33.199 5671.385 - 5696.591: 0.7084% ( 32) 00:07:33.199 5696.591 - 5721.797: 0.9646% ( 51) 00:07:33.199 5721.797 - 5747.003: 1.5123% ( 109) 00:07:33.199 5747.003 - 5772.209: 2.3312% ( 163) 00:07:33.199 5772.209 - 5797.415: 3.4365% ( 220) 00:07:33.199 5797.415 - 5822.622: 4.9538% ( 302) 00:07:33.200 5822.622 - 5847.828: 6.7574% ( 359) 00:07:33.200 5847.828 - 5873.034: 9.0183% ( 450) 00:07:33.200 5873.034 - 
5898.240: 11.6308% ( 520) 00:07:33.200 5898.240 - 5923.446: 14.2182% ( 515) 00:07:33.200 5923.446 - 5948.652: 16.9815% ( 550) 00:07:33.200 5948.652 - 5973.858: 19.5689% ( 515) 00:07:33.200 5973.858 - 5999.065: 22.0810% ( 500) 00:07:33.200 5999.065 - 6024.271: 24.5981% ( 501) 00:07:33.200 6024.271 - 6049.477: 27.2106% ( 520) 00:07:33.200 6049.477 - 6074.683: 29.8432% ( 524) 00:07:33.200 6074.683 - 6099.889: 32.5060% ( 530) 00:07:33.200 6099.889 - 6125.095: 35.1336% ( 523) 00:07:33.200 6125.095 - 6150.302: 37.7211% ( 515) 00:07:33.200 6150.302 - 6175.508: 40.4341% ( 540) 00:07:33.200 6175.508 - 6200.714: 43.1320% ( 537) 00:07:33.200 6200.714 - 6225.920: 45.8551% ( 542) 00:07:33.200 6225.920 - 6251.126: 48.4626% ( 519) 00:07:33.200 6251.126 - 6276.332: 51.1304% ( 531) 00:07:33.200 6276.332 - 6301.538: 53.8987% ( 551) 00:07:33.200 6301.538 - 6326.745: 56.5464% ( 527) 00:07:33.200 6326.745 - 6351.951: 59.1288% ( 514) 00:07:33.200 6351.951 - 6377.157: 61.7062% ( 513) 00:07:33.200 6377.157 - 6402.363: 64.3187% ( 520) 00:07:33.200 6402.363 - 6427.569: 66.9313% ( 520) 00:07:33.200 6427.569 - 6452.775: 69.6192% ( 535) 00:07:33.200 6452.775 - 6503.188: 74.8945% ( 1050) 00:07:33.200 6503.188 - 6553.600: 79.9387% ( 1004) 00:07:33.200 6553.600 - 6604.012: 84.4403% ( 896) 00:07:33.200 6604.012 - 6654.425: 87.9170% ( 692) 00:07:33.200 6654.425 - 6704.837: 89.9618% ( 407) 00:07:33.200 6704.837 - 6755.249: 90.8611% ( 179) 00:07:33.200 6755.249 - 6805.662: 91.4138% ( 110) 00:07:33.200 6805.662 - 6856.074: 91.7554% ( 68) 00:07:33.200 6856.074 - 6906.486: 92.0267% ( 54) 00:07:33.200 6906.486 - 6956.898: 92.2679% ( 48) 00:07:33.200 6956.898 - 7007.311: 92.5241% ( 51) 00:07:33.200 7007.311 - 7057.723: 92.7753% ( 50) 00:07:33.200 7057.723 - 7108.135: 93.0014% ( 45) 00:07:33.200 7108.135 - 7158.548: 93.1873% ( 37) 00:07:33.201 7158.548 - 7208.960: 93.3832% ( 39) 00:07:33.201 7208.960 - 7259.372: 93.6093% ( 45) 00:07:33.201 7259.372 - 7309.785: 93.7902% ( 36) 00:07:33.201 7309.785 - 7360.197: 93.9359% ( 29) 00:07:33.201 7360.197 - 7410.609: 94.1117% ( 35) 00:07:33.201 7410.609 - 7461.022: 94.2725% ( 32) 00:07:33.201 7461.022 - 7511.434: 94.4534% ( 36) 00:07:33.201 7511.434 - 7561.846: 94.6443% ( 38) 00:07:33.201 7561.846 - 7612.258: 94.8252% ( 36) 00:07:33.201 7612.258 - 7662.671: 95.0111% ( 37) 00:07:33.201 7662.671 - 7713.083: 95.1819% ( 34) 00:07:33.201 7713.083 - 7763.495: 95.3175% ( 27) 00:07:33.201 7763.495 - 7813.908: 95.4733% ( 31) 00:07:33.201 7813.908 - 7864.320: 95.6441% ( 34) 00:07:33.201 7864.320 - 7914.732: 95.7948% ( 30) 00:07:33.201 7914.732 - 7965.145: 95.9455% ( 30) 00:07:33.201 7965.145 - 8015.557: 96.0912% ( 29) 00:07:33.201 8015.557 - 8065.969: 96.2520% ( 32) 00:07:33.201 8065.969 - 8116.382: 96.4228% ( 34) 00:07:33.201 8116.382 - 8166.794: 96.5736% ( 30) 00:07:33.201 8166.794 - 8217.206: 96.7243% ( 30) 00:07:33.201 8217.206 - 8267.618: 96.8449% ( 24) 00:07:33.201 8267.618 - 8318.031: 96.9453% ( 20) 00:07:33.201 8318.031 - 8368.443: 97.0207% ( 15) 00:07:33.201 8368.443 - 8418.855: 97.0961% ( 15) 00:07:33.201 8418.855 - 8469.268: 97.1714% ( 15) 00:07:33.201 8469.268 - 8519.680: 97.2367% ( 13) 00:07:33.202 8519.680 - 8570.092: 97.3071% ( 14) 00:07:33.202 8570.092 - 8620.505: 97.3724% ( 13) 00:07:33.202 8620.505 - 8670.917: 97.4477% ( 15) 00:07:33.202 8670.917 - 8721.329: 97.5080% ( 12) 00:07:33.202 8721.329 - 8771.742: 97.5734% ( 13) 00:07:33.202 8771.742 - 8822.154: 97.6387% ( 13) 00:07:33.202 8822.154 - 8872.566: 97.6939% ( 11) 00:07:33.202 8872.566 - 8922.978: 97.7341% ( 8) 00:07:33.202 
8922.978 - 8973.391: 97.7693% ( 7) 00:07:33.202 8973.391 - 9023.803: 97.8045% ( 7) 00:07:33.202 9023.803 - 9074.215: 97.8296% ( 5) 00:07:33.202 9074.215 - 9124.628: 97.8547% ( 5) 00:07:33.202 9124.628 - 9175.040: 97.8748% ( 4) 00:07:33.202 9175.040 - 9225.452: 97.8949% ( 4) 00:07:33.202 9225.452 - 9275.865: 97.9049% ( 2) 00:07:33.202 9275.865 - 9326.277: 97.9200% ( 3) 00:07:33.202 9326.277 - 9376.689: 97.9301% ( 2) 00:07:33.202 9376.689 - 9427.102: 97.9401% ( 2) 00:07:33.202 9427.102 - 9477.514: 97.9502% ( 2) 00:07:33.202 9477.514 - 9527.926: 97.9602% ( 2) 00:07:33.202 9527.926 - 9578.338: 97.9703% ( 2) 00:07:33.202 9578.338 - 9628.751: 97.9904% ( 4) 00:07:33.202 9628.751 - 9679.163: 98.0155% ( 5) 00:07:33.202 9679.163 - 9729.575: 98.0456% ( 6) 00:07:33.202 9729.575 - 9779.988: 98.0858% ( 8) 00:07:33.202 9779.988 - 9830.400: 98.1662% ( 16) 00:07:33.202 9830.400 - 9880.812: 98.2416% ( 15) 00:07:33.202 9880.812 - 9931.225: 98.3018% ( 12) 00:07:33.202 9931.225 - 9981.637: 98.3772% ( 15) 00:07:33.202 9981.637 - 10032.049: 98.4375% ( 12) 00:07:33.202 10032.049 - 10082.462: 98.5028% ( 13) 00:07:33.202 10082.462 - 10132.874: 98.5681% ( 13) 00:07:33.202 10132.874 - 10183.286: 98.6184% ( 10) 00:07:33.202 10183.286 - 10233.698: 98.6787% ( 12) 00:07:33.202 10233.698 - 10284.111: 98.7339% ( 11) 00:07:33.202 10284.111 - 10334.523: 98.7942% ( 12) 00:07:33.203 10334.523 - 10384.935: 98.8495% ( 11) 00:07:33.203 10384.935 - 10435.348: 98.9047% ( 11) 00:07:33.203 10435.348 - 10485.760: 98.9650% ( 12) 00:07:33.203 10485.760 - 10536.172: 99.0052% ( 8) 00:07:33.203 10536.172 - 10586.585: 99.0504% ( 9) 00:07:33.203 10586.585 - 10636.997: 99.0906% ( 8) 00:07:33.203 10636.997 - 10687.409: 99.1158% ( 5) 00:07:33.203 10687.409 - 10737.822: 99.1359% ( 4) 00:07:33.203 10737.822 - 10788.234: 99.1610% ( 5) 00:07:33.203 10788.234 - 10838.646: 99.1861% ( 5) 00:07:33.203 10838.646 - 10889.058: 99.2062% ( 4) 00:07:33.203 10889.058 - 10939.471: 99.2363% ( 6) 00:07:33.203 10939.471 - 10989.883: 99.2514% ( 3) 00:07:33.203 10989.883 - 11040.295: 99.2715% ( 4) 00:07:33.203 11040.295 - 11090.708: 99.2816% ( 2) 00:07:33.203 11090.708 - 11141.120: 99.2916% ( 2) 00:07:33.203 11141.120 - 11191.532: 99.3016% ( 2) 00:07:33.203 11191.532 - 11241.945: 99.3117% ( 2) 00:07:33.203 11241.945 - 11292.357: 99.3217% ( 2) 00:07:33.203 11292.357 - 11342.769: 99.3318% ( 2) 00:07:33.203 11342.769 - 11393.182: 99.3368% ( 1) 00:07:33.203 11393.182 - 11443.594: 99.3469% ( 2) 00:07:33.203 11443.594 - 11494.006: 99.3569% ( 2) 00:07:33.203 13611.323 - 13712.148: 99.3820% ( 5) 00:07:33.203 13712.148 - 13812.972: 99.4172% ( 7) 00:07:33.203 13812.972 - 13913.797: 99.4524% ( 7) 00:07:33.203 13913.797 - 14014.622: 99.4825% ( 6) 00:07:33.203 14014.622 - 14115.446: 99.5177% ( 7) 00:07:33.203 14115.446 - 14216.271: 99.5478% ( 6) 00:07:33.203 14216.271 - 14317.095: 99.5830% ( 7) 00:07:33.203 14317.095 - 14417.920: 99.6182% ( 7) 00:07:33.203 14417.920 - 14518.745: 99.6533% ( 7) 00:07:33.204 14518.745 - 14619.569: 99.6785% ( 5) 00:07:33.204 18148.431 - 18249.255: 99.6935% ( 3) 00:07:33.204 18249.255 - 18350.080: 99.7237% ( 6) 00:07:33.204 18350.080 - 18450.905: 99.7538% ( 6) 00:07:33.204 18450.905 - 18551.729: 99.7840% ( 6) 00:07:33.204 18551.729 - 18652.554: 99.8191% ( 7) 00:07:33.204 18652.554 - 18753.378: 99.8543% ( 7) 00:07:33.204 18753.378 - 18854.203: 99.8895% ( 7) 00:07:33.204 18854.203 - 18955.028: 99.9196% ( 6) 00:07:33.204 18955.028 - 19055.852: 99.9548% ( 7) 00:07:33.204 19055.852 - 19156.677: 99.9900% ( 7) 00:07:33.204 19156.677 - 19257.502: 100.0000% 
( 2) 00:07:33.204 00:07:33.204 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:33.204 ============================================================================== 00:07:33.204 Range in us Cumulative IO count 00:07:33.204 3755.717 - 3780.923: 0.0050% ( 1) 00:07:33.204 3780.923 - 3806.129: 0.0151% ( 2) 00:07:33.204 3806.129 - 3831.335: 0.0251% ( 2) 00:07:33.204 3831.335 - 3856.542: 0.0352% ( 2) 00:07:33.204 3856.542 - 3881.748: 0.0502% ( 3) 00:07:33.204 3881.748 - 3906.954: 0.0603% ( 2) 00:07:33.204 3906.954 - 3932.160: 0.0804% ( 4) 00:07:33.204 3932.160 - 3957.366: 0.0904% ( 2) 00:07:33.204 3957.366 - 3982.572: 0.1005% ( 2) 00:07:33.204 3982.572 - 4007.778: 0.1156% ( 3) 00:07:33.204 4007.778 - 4032.985: 0.1256% ( 2) 00:07:33.204 4032.985 - 4058.191: 0.1357% ( 2) 00:07:33.204 4058.191 - 4083.397: 0.1507% ( 3) 00:07:33.204 4083.397 - 4108.603: 0.1608% ( 2) 00:07:33.204 4108.603 - 4133.809: 0.1708% ( 2) 00:07:33.204 4133.809 - 4159.015: 0.1809% ( 2) 00:07:33.204 4159.015 - 4184.222: 0.1909% ( 2) 00:07:33.204 4184.222 - 4209.428: 0.2010% ( 2) 00:07:33.204 4209.428 - 4234.634: 0.2110% ( 2) 00:07:33.204 4234.634 - 4259.840: 0.2211% ( 2) 00:07:33.204 4259.840 - 4285.046: 0.2311% ( 2) 00:07:33.204 4285.046 - 4310.252: 0.2462% ( 3) 00:07:33.204 4310.252 - 4335.458: 0.2562% ( 2) 00:07:33.204 4335.458 - 4360.665: 0.2713% ( 3) 00:07:33.205 4360.665 - 4385.871: 0.2814% ( 2) 00:07:33.205 4385.871 - 4411.077: 0.2964% ( 3) 00:07:33.205 4411.077 - 4436.283: 0.3065% ( 2) 00:07:33.205 4436.283 - 4461.489: 0.3165% ( 2) 00:07:33.205 4461.489 - 4486.695: 0.3215% ( 1) 00:07:33.205 5343.705 - 5368.911: 0.3567% ( 7) 00:07:33.205 5368.911 - 5394.117: 0.3668% ( 2) 00:07:33.205 5394.117 - 5419.323: 0.3768% ( 2) 00:07:33.205 5419.323 - 5444.529: 0.3869% ( 2) 00:07:33.205 5444.529 - 5469.735: 0.4019% ( 3) 00:07:33.205 5469.735 - 5494.942: 0.4120% ( 2) 00:07:33.205 5494.942 - 5520.148: 0.4220% ( 2) 00:07:33.205 5520.148 - 5545.354: 0.4371% ( 3) 00:07:33.205 5545.354 - 5570.560: 0.4471% ( 2) 00:07:33.205 5570.560 - 5595.766: 0.4572% ( 2) 00:07:33.205 5595.766 - 5620.972: 0.4823% ( 5) 00:07:33.205 5620.972 - 5646.178: 0.5074% ( 5) 00:07:33.205 5646.178 - 5671.385: 0.5828% ( 15) 00:07:33.205 5671.385 - 5696.591: 0.7637% ( 36) 00:07:33.205 5696.591 - 5721.797: 0.9847% ( 44) 00:07:33.205 5721.797 - 5747.003: 1.3615% ( 75) 00:07:33.205 5747.003 - 5772.209: 2.1001% ( 147) 00:07:33.205 5772.209 - 5797.415: 3.2757% ( 234) 00:07:33.205 5797.415 - 5822.622: 4.8282% ( 309) 00:07:33.205 5822.622 - 5847.828: 6.7675% ( 386) 00:07:33.205 5847.828 - 5873.034: 9.0133% ( 447) 00:07:33.205 5873.034 - 5898.240: 11.5555% ( 506) 00:07:33.205 5898.240 - 5923.446: 14.1730% ( 521) 00:07:33.205 5923.446 - 5948.652: 16.7454% ( 512) 00:07:33.205 5948.652 - 5973.858: 19.3428% ( 517) 00:07:33.205 5973.858 - 5999.065: 21.8850% ( 506) 00:07:33.205 5999.065 - 6024.271: 24.5679% ( 534) 00:07:33.205 6024.271 - 6049.477: 27.2709% ( 538) 00:07:33.205 6049.477 - 6074.683: 29.8483% ( 513) 00:07:33.205 6074.683 - 6099.889: 32.5111% ( 530) 00:07:33.205 6099.889 - 6125.095: 35.1387% ( 523) 00:07:33.205 6125.095 - 6150.302: 37.8316% ( 536) 00:07:33.205 6150.302 - 6175.508: 40.4592% ( 523) 00:07:33.205 6175.508 - 6200.714: 43.1471% ( 535) 00:07:33.206 6200.714 - 6225.920: 45.8049% ( 529) 00:07:33.206 6225.920 - 6251.126: 48.3772% ( 512) 00:07:33.206 6251.126 - 6276.332: 51.0651% ( 535) 00:07:33.206 6276.332 - 6301.538: 53.7430% ( 533) 00:07:33.206 6301.538 - 6326.745: 56.3103% ( 511) 00:07:33.206 6326.745 - 6351.951: 58.9932% ( 534) 
00:07:33.206 6351.951 - 6377.157: 61.6007% ( 519) 00:07:33.206 6377.157 - 6402.363: 64.1881% ( 515) 00:07:33.206 6402.363 - 6427.569: 66.8810% ( 536) 00:07:33.206 6427.569 - 6452.775: 69.5589% ( 533) 00:07:33.206 6452.775 - 6503.188: 74.7890% ( 1041) 00:07:33.206 6503.188 - 6553.600: 79.9186% ( 1021) 00:07:33.206 6553.600 - 6604.012: 84.4705% ( 906) 00:07:33.206 6604.012 - 6654.425: 87.8567% ( 674) 00:07:33.206 6654.425 - 6704.837: 89.8412% ( 395) 00:07:33.206 6704.837 - 6755.249: 90.7707% ( 185) 00:07:33.206 6755.249 - 6805.662: 91.3434% ( 114) 00:07:33.206 6805.662 - 6856.074: 91.7755% ( 86) 00:07:33.206 6856.074 - 6906.486: 92.1222% ( 69) 00:07:33.206 6906.486 - 6956.898: 92.4437% ( 64) 00:07:33.206 6956.898 - 7007.311: 92.7803% ( 67) 00:07:33.207 7007.311 - 7057.723: 93.1119% ( 66) 00:07:33.207 7057.723 - 7108.135: 93.4134% ( 60) 00:07:33.207 7108.135 - 7158.548: 93.6998% ( 57) 00:07:33.207 7158.548 - 7208.960: 93.9711% ( 54) 00:07:33.207 7208.960 - 7259.372: 94.2072% ( 47) 00:07:33.207 7259.372 - 7309.785: 94.3881% ( 36) 00:07:33.207 7309.785 - 7360.197: 94.5790% ( 38) 00:07:33.207 7360.197 - 7410.609: 94.7749% ( 39) 00:07:33.207 7410.609 - 7461.022: 94.9558% ( 36) 00:07:33.207 7461.022 - 7511.434: 95.1517% ( 39) 00:07:33.207 7511.434 - 7561.846: 95.3125% ( 32) 00:07:33.207 7561.846 - 7612.258: 95.4783% ( 33) 00:07:33.207 7612.258 - 7662.671: 95.5838% ( 21) 00:07:33.207 7662.671 - 7713.083: 95.6893% ( 21) 00:07:33.207 7713.083 - 7763.495: 95.8149% ( 25) 00:07:33.207 7763.495 - 7813.908: 95.9254% ( 22) 00:07:33.207 7813.908 - 7864.320: 96.0360% ( 22) 00:07:33.207 7864.320 - 7914.732: 96.1867% ( 30) 00:07:33.207 7914.732 - 7965.145: 96.3123% ( 25) 00:07:33.207 7965.145 - 8015.557: 96.4228% ( 22) 00:07:33.207 8015.557 - 8065.969: 96.5133% ( 18) 00:07:33.207 8065.969 - 8116.382: 96.6037% ( 18) 00:07:33.207 8116.382 - 8166.794: 96.6590% ( 11) 00:07:33.207 8166.794 - 8217.206: 96.7343% ( 15) 00:07:33.207 8217.206 - 8267.618: 96.7896% ( 11) 00:07:33.207 8267.618 - 8318.031: 96.8449% ( 11) 00:07:33.207 8318.031 - 8368.443: 96.9051% ( 12) 00:07:33.207 8368.443 - 8418.855: 96.9453% ( 8) 00:07:33.207 8418.855 - 8469.268: 96.9805% ( 7) 00:07:33.207 8469.268 - 8519.680: 97.0207% ( 8) 00:07:33.207 8519.680 - 8570.092: 97.0609% ( 8) 00:07:33.207 8570.092 - 8620.505: 97.0961% ( 7) 00:07:33.207 8620.505 - 8670.917: 97.1463% ( 10) 00:07:33.207 8670.917 - 8721.329: 97.1815% ( 7) 00:07:33.207 8721.329 - 8771.742: 97.2267% ( 9) 00:07:33.207 8771.742 - 8822.154: 97.2769% ( 10) 00:07:33.207 8822.154 - 8872.566: 97.3322% ( 11) 00:07:33.207 8872.566 - 8922.978: 97.3724% ( 8) 00:07:33.207 8922.978 - 8973.391: 97.4126% ( 8) 00:07:33.207 8973.391 - 9023.803: 97.4427% ( 6) 00:07:33.207 9023.803 - 9074.215: 97.4829% ( 8) 00:07:33.207 9074.215 - 9124.628: 97.5281% ( 9) 00:07:33.207 9124.628 - 9175.040: 97.5734% ( 9) 00:07:33.207 9175.040 - 9225.452: 97.6286% ( 11) 00:07:33.207 9225.452 - 9275.865: 97.6738% ( 9) 00:07:33.207 9275.865 - 9326.277: 97.7291% ( 11) 00:07:33.207 9326.277 - 9376.689: 97.7743% ( 9) 00:07:33.207 9376.689 - 9427.102: 97.8246% ( 10) 00:07:33.207 9427.102 - 9477.514: 97.8748% ( 10) 00:07:33.207 9477.514 - 9527.926: 97.9150% ( 8) 00:07:33.207 9527.926 - 9578.338: 97.9652% ( 10) 00:07:33.207 9578.338 - 9628.751: 98.0105% ( 9) 00:07:33.207 9628.751 - 9679.163: 98.0607% ( 10) 00:07:33.207 9679.163 - 9729.575: 98.1059% ( 9) 00:07:33.207 9729.575 - 9779.988: 98.1612% ( 11) 00:07:33.207 9779.988 - 9830.400: 98.2215% ( 12) 00:07:33.207 9830.400 - 9880.812: 98.3420% ( 24) 00:07:33.207 9880.812 - 
9931.225: 98.4074% ( 13) 00:07:33.207 9931.225 - 9981.637: 98.4777% ( 14) 00:07:33.207 9981.637 - 10032.049: 98.5631% ( 17) 00:07:33.207 10032.049 - 10082.462: 98.6133% ( 10) 00:07:33.207 10082.462 - 10132.874: 98.6636% ( 10) 00:07:33.207 10132.874 - 10183.286: 98.7138% ( 10) 00:07:33.207 10183.286 - 10233.698: 98.7641% ( 10) 00:07:33.207 10233.698 - 10284.111: 98.8043% ( 8) 00:07:33.207 10284.111 - 10334.523: 98.8545% ( 10) 00:07:33.207 10334.523 - 10384.935: 98.8947% ( 8) 00:07:33.207 10384.935 - 10435.348: 98.9349% ( 8) 00:07:33.207 10435.348 - 10485.760: 98.9851% ( 10) 00:07:33.207 10485.760 - 10536.172: 99.0253% ( 8) 00:07:33.207 10536.172 - 10586.585: 99.0806% ( 11) 00:07:33.207 10586.585 - 10636.997: 99.1107% ( 6) 00:07:33.207 10636.997 - 10687.409: 99.1610% ( 10) 00:07:33.207 10687.409 - 10737.822: 99.1861% ( 5) 00:07:33.207 10737.822 - 10788.234: 99.2112% ( 5) 00:07:33.207 10788.234 - 10838.646: 99.2414% ( 6) 00:07:33.207 10838.646 - 10889.058: 99.2665% ( 5) 00:07:33.207 10889.058 - 10939.471: 99.2916% ( 5) 00:07:33.207 10939.471 - 10989.883: 99.3167% ( 5) 00:07:33.207 10989.883 - 11040.295: 99.3418% ( 5) 00:07:33.207 11040.295 - 11090.708: 99.3569% ( 3) 00:07:33.207 13510.498 - 13611.323: 99.3619% ( 1) 00:07:33.207 13611.323 - 13712.148: 99.3971% ( 7) 00:07:33.207 13712.148 - 13812.972: 99.4273% ( 6) 00:07:33.207 13812.972 - 13913.797: 99.4624% ( 7) 00:07:33.207 13913.797 - 14014.622: 99.4926% ( 6) 00:07:33.207 14014.622 - 14115.446: 99.5227% ( 6) 00:07:33.207 14115.446 - 14216.271: 99.5579% ( 7) 00:07:33.207 14216.271 - 14317.095: 99.5930% ( 7) 00:07:33.207 14317.095 - 14417.920: 99.6232% ( 6) 00:07:33.207 14417.920 - 14518.745: 99.6584% ( 7) 00:07:33.207 14518.745 - 14619.569: 99.6785% ( 4) 00:07:33.207 18047.606 - 18148.431: 99.6835% ( 1) 00:07:33.207 18148.431 - 18249.255: 99.7136% ( 6) 00:07:33.207 18249.255 - 18350.080: 99.7438% ( 6) 00:07:33.207 18350.080 - 18450.905: 99.7789% ( 7) 00:07:33.207 18450.905 - 18551.729: 99.8141% ( 7) 00:07:33.207 18551.729 - 18652.554: 99.8392% ( 5) 00:07:33.207 18652.554 - 18753.378: 99.8744% ( 7) 00:07:33.207 18753.378 - 18854.203: 99.9096% ( 7) 00:07:33.207 18854.203 - 18955.028: 99.9447% ( 7) 00:07:33.207 18955.028 - 19055.852: 99.9799% ( 7) 00:07:33.207 19055.852 - 19156.677: 100.0000% ( 4) 00:07:33.207 00:07:33.207 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:33.207 ============================================================================== 00:07:33.207 Range in us Cumulative IO count 00:07:33.207 3579.274 - 3604.480: 0.0251% ( 5) 00:07:33.207 3604.480 - 3629.686: 0.0352% ( 2) 00:07:33.207 3629.686 - 3654.892: 0.0452% ( 2) 00:07:33.207 3654.892 - 3680.098: 0.0553% ( 2) 00:07:33.207 3680.098 - 3705.305: 0.0703% ( 3) 00:07:33.207 3705.305 - 3730.511: 0.0804% ( 2) 00:07:33.207 3730.511 - 3755.717: 0.0955% ( 3) 00:07:33.207 3755.717 - 3780.923: 0.1055% ( 2) 00:07:33.207 3780.923 - 3806.129: 0.1156% ( 2) 00:07:33.207 3806.129 - 3831.335: 0.1306% ( 3) 00:07:33.207 3831.335 - 3856.542: 0.1407% ( 2) 00:07:33.207 3856.542 - 3881.748: 0.1557% ( 3) 00:07:33.207 3881.748 - 3906.954: 0.1658% ( 2) 00:07:33.207 3906.954 - 3932.160: 0.1758% ( 2) 00:07:33.207 3932.160 - 3957.366: 0.1859% ( 2) 00:07:33.207 3957.366 - 3982.572: 0.2010% ( 3) 00:07:33.207 3982.572 - 4007.778: 0.2110% ( 2) 00:07:33.207 4007.778 - 4032.985: 0.2261% ( 3) 00:07:33.207 4032.985 - 4058.191: 0.2361% ( 2) 00:07:33.207 4058.191 - 4083.397: 0.2462% ( 2) 00:07:33.207 4083.397 - 4108.603: 0.2613% ( 3) 00:07:33.207 4108.603 - 4133.809: 0.2713% ( 2) 00:07:33.207 
4133.809 - 4159.015: 0.2864% ( 3) 00:07:33.207 4159.015 - 4184.222: 0.2964% ( 2) 00:07:33.207 4184.222 - 4209.428: 0.3115% ( 3) 00:07:33.207 4209.428 - 4234.634: 0.3215% ( 2) 00:07:33.207 5116.849 - 5142.055: 0.3316% ( 2) 00:07:33.207 5142.055 - 5167.262: 0.3467% ( 3) 00:07:33.208 5167.262 - 5192.468: 0.3567% ( 2) 00:07:33.208 5192.468 - 5217.674: 0.3668% ( 2) 00:07:33.208 5217.674 - 5242.880: 0.3768% ( 2) 00:07:33.208 5242.880 - 5268.086: 0.3919% ( 3) 00:07:33.208 5268.086 - 5293.292: 0.4019% ( 2) 00:07:33.208 5293.292 - 5318.498: 0.4220% ( 4) 00:07:33.208 5318.498 - 5343.705: 0.4321% ( 2) 00:07:33.208 5343.705 - 5368.911: 0.4421% ( 2) 00:07:33.208 5368.911 - 5394.117: 0.4522% ( 2) 00:07:33.208 5394.117 - 5419.323: 0.4622% ( 2) 00:07:33.208 5419.323 - 5444.529: 0.4723% ( 2) 00:07:33.208 5444.529 - 5469.735: 0.4873% ( 3) 00:07:33.208 5469.735 - 5494.942: 0.4974% ( 2) 00:07:33.208 5494.942 - 5520.148: 0.5074% ( 2) 00:07:33.208 5520.148 - 5545.354: 0.5175% ( 2) 00:07:33.208 5545.354 - 5570.560: 0.5326% ( 3) 00:07:33.208 5570.560 - 5595.766: 0.5426% ( 2) 00:07:33.208 5595.766 - 5620.972: 0.5577% ( 3) 00:07:33.208 5620.972 - 5646.178: 0.6129% ( 11) 00:07:33.208 5646.178 - 5671.385: 0.6833% ( 14) 00:07:33.208 5671.385 - 5696.591: 0.7888% ( 21) 00:07:33.208 5696.591 - 5721.797: 1.0249% ( 47) 00:07:33.208 5721.797 - 5747.003: 1.5223% ( 99) 00:07:33.208 5747.003 - 5772.209: 2.2006% ( 135) 00:07:33.208 5772.209 - 5797.415: 3.1752% ( 194) 00:07:33.208 5797.415 - 5822.622: 4.7428% ( 312) 00:07:33.208 5822.622 - 5847.828: 6.7223% ( 394) 00:07:33.208 5847.828 - 5873.034: 9.0484% ( 463) 00:07:33.208 5873.034 - 5898.240: 11.5203% ( 492) 00:07:33.208 5898.240 - 5923.446: 14.1580% ( 525) 00:07:33.208 5923.446 - 5948.652: 16.8057% ( 527) 00:07:33.208 5948.652 - 5973.858: 19.4182% ( 520) 00:07:33.208 5973.858 - 5999.065: 21.9554% ( 505) 00:07:33.208 5999.065 - 6024.271: 24.4926% ( 505) 00:07:33.208 6024.271 - 6049.477: 27.1051% ( 520) 00:07:33.208 6049.477 - 6074.683: 29.7578% ( 528) 00:07:33.208 6074.683 - 6099.889: 32.4106% ( 528) 00:07:33.208 6099.889 - 6125.095: 35.0784% ( 531) 00:07:33.208 6125.095 - 6150.302: 37.7562% ( 533) 00:07:33.208 6150.302 - 6175.508: 40.4190% ( 530) 00:07:33.208 6175.508 - 6200.714: 43.0969% ( 533) 00:07:33.208 6200.714 - 6225.920: 45.7496% ( 528) 00:07:33.208 6225.920 - 6251.126: 48.4425% ( 536) 00:07:33.208 6251.126 - 6276.332: 51.0852% ( 526) 00:07:33.208 6276.332 - 6301.538: 53.7229% ( 525) 00:07:33.208 6301.538 - 6326.745: 56.3153% ( 516) 00:07:33.208 6326.745 - 6351.951: 58.9831% ( 531) 00:07:33.208 6351.951 - 6377.157: 61.6258% ( 526) 00:07:33.208 6377.157 - 6402.363: 64.2383% ( 520) 00:07:33.208 6402.363 - 6427.569: 66.9664% ( 543) 00:07:33.208 6427.569 - 6452.775: 69.6242% ( 529) 00:07:33.208 6452.775 - 6503.188: 74.8242% ( 1035) 00:07:33.208 6503.188 - 6553.600: 80.0392% ( 1038) 00:07:33.208 6553.600 - 6604.012: 84.6915% ( 926) 00:07:33.208 6604.012 - 6654.425: 87.9924% ( 657) 00:07:33.208 6654.425 - 6704.837: 89.9719% ( 394) 00:07:33.208 6704.837 - 6755.249: 90.9264% ( 190) 00:07:33.208 6755.249 - 6805.662: 91.5143% ( 117) 00:07:33.208 6805.662 - 6856.074: 91.9664% ( 90) 00:07:33.208 6856.074 - 6906.486: 92.2679% ( 60) 00:07:33.208 6906.486 - 6956.898: 92.5844% ( 63) 00:07:33.208 6956.898 - 7007.311: 92.9110% ( 65) 00:07:33.208 7007.311 - 7057.723: 93.1773% ( 53) 00:07:33.208 7057.723 - 7108.135: 93.4385% ( 52) 00:07:33.208 7108.135 - 7158.548: 93.7349% ( 59) 00:07:33.208 7158.548 - 7208.960: 94.0113% ( 55) 00:07:33.208 7208.960 - 7259.372: 94.2876% ( 55) 
00:07:33.208 7259.372 - 7309.785: 94.4885% ( 40) 00:07:33.208 7309.785 - 7360.197: 94.6895% ( 40) 00:07:33.208 7360.197 - 7410.609: 94.8654% ( 35) 00:07:33.208 7410.609 - 7461.022: 95.0311% ( 33) 00:07:33.208 7461.022 - 7511.434: 95.2170% ( 37) 00:07:33.208 7511.434 - 7561.846: 95.3778% ( 32) 00:07:33.208 7561.846 - 7612.258: 95.5034% ( 25) 00:07:33.208 7612.258 - 7662.671: 95.6441% ( 28) 00:07:33.208 7662.671 - 7713.083: 95.7697% ( 25) 00:07:33.208 7713.083 - 7763.495: 95.8852% ( 23) 00:07:33.208 7763.495 - 7813.908: 96.0058% ( 24) 00:07:33.208 7813.908 - 7864.320: 96.1214% ( 23) 00:07:33.208 7864.320 - 7914.732: 96.2269% ( 21) 00:07:33.208 7914.732 - 7965.145: 96.3173% ( 18) 00:07:33.208 7965.145 - 8015.557: 96.4078% ( 18) 00:07:33.208 8015.557 - 8065.969: 96.5032% ( 19) 00:07:33.208 8065.969 - 8116.382: 96.5836% ( 16) 00:07:33.208 8116.382 - 8166.794: 96.6590% ( 15) 00:07:33.208 8166.794 - 8217.206: 96.7243% ( 13) 00:07:33.208 8217.206 - 8267.618: 96.7795% ( 11) 00:07:33.208 8267.618 - 8318.031: 96.8298% ( 10) 00:07:33.208 8318.031 - 8368.443: 96.8901% ( 12) 00:07:33.208 8368.443 - 8418.855: 96.9353% ( 9) 00:07:33.208 8418.855 - 8469.268: 96.9906% ( 11) 00:07:33.208 8469.268 - 8519.680: 97.0458% ( 11) 00:07:33.208 8519.680 - 8570.092: 97.0910% ( 9) 00:07:33.208 8570.092 - 8620.505: 97.1312% ( 8) 00:07:33.208 8620.505 - 8670.917: 97.1764% ( 9) 00:07:33.208 8670.917 - 8721.329: 97.2267% ( 10) 00:07:33.208 8721.329 - 8771.742: 97.2870% ( 12) 00:07:33.208 8771.742 - 8822.154: 97.3372% ( 10) 00:07:33.208 8822.154 - 8872.566: 97.3774% ( 8) 00:07:33.208 8872.566 - 8922.978: 97.4126% ( 7) 00:07:33.208 8922.978 - 8973.391: 97.4427% ( 6) 00:07:33.208 8973.391 - 9023.803: 97.4779% ( 7) 00:07:33.208 9023.803 - 9074.215: 97.5080% ( 6) 00:07:33.208 9074.215 - 9124.628: 97.5683% ( 12) 00:07:33.208 9124.628 - 9175.040: 97.6135% ( 9) 00:07:33.208 9175.040 - 9225.452: 97.6688% ( 11) 00:07:33.208 9225.452 - 9275.865: 97.7191% ( 10) 00:07:33.208 9275.865 - 9326.277: 97.7643% ( 9) 00:07:33.208 9326.277 - 9376.689: 97.8095% ( 9) 00:07:33.208 9376.689 - 9427.102: 97.8597% ( 10) 00:07:33.208 9427.102 - 9477.514: 97.9502% ( 18) 00:07:33.208 9477.514 - 9527.926: 97.9803% ( 6) 00:07:33.208 9527.926 - 9578.338: 98.0205% ( 8) 00:07:33.208 9578.338 - 9628.751: 98.0607% ( 8) 00:07:33.208 9628.751 - 9679.163: 98.0959% ( 7) 00:07:33.208 9679.163 - 9729.575: 98.1863% ( 18) 00:07:33.208 9729.575 - 9779.988: 98.2265% ( 8) 00:07:33.208 9779.988 - 9830.400: 98.2767% ( 10) 00:07:33.208 9830.400 - 9880.812: 98.3370% ( 12) 00:07:33.208 9880.812 - 9931.225: 98.3822% ( 9) 00:07:33.208 9931.225 - 9981.637: 98.4275% ( 9) 00:07:33.208 9981.637 - 10032.049: 98.4727% ( 9) 00:07:33.208 10032.049 - 10082.462: 98.5179% ( 9) 00:07:33.208 10082.462 - 10132.874: 98.5681% ( 10) 00:07:33.208 10132.874 - 10183.286: 98.6184% ( 10) 00:07:33.208 10183.286 - 10233.698: 98.6586% ( 8) 00:07:33.208 10233.698 - 10284.111: 98.7189% ( 12) 00:07:33.208 10284.111 - 10334.523: 98.7741% ( 11) 00:07:33.208 10334.523 - 10384.935: 98.8143% ( 8) 00:07:33.208 10384.935 - 10435.348: 98.8595% ( 9) 00:07:33.208 10435.348 - 10485.760: 98.9098% ( 10) 00:07:33.208 10485.760 - 10536.172: 98.9500% ( 8) 00:07:33.208 10536.172 - 10586.585: 98.9751% ( 5) 00:07:33.208 10586.585 - 10636.997: 99.0153% ( 8) 00:07:33.208 10636.997 - 10687.409: 99.0555% ( 8) 00:07:33.208 10687.409 - 10737.822: 99.0957% ( 8) 00:07:33.208 10737.822 - 10788.234: 99.1308% ( 7) 00:07:33.208 10788.234 - 10838.646: 99.1509% ( 4) 00:07:33.208 10838.646 - 10889.058: 99.1861% ( 7) 00:07:33.208 10889.058 
- 10939.471: 99.2012% ( 3) 00:07:33.208 10939.471 - 10989.883: 99.2162% ( 3) 00:07:33.208 10989.883 - 11040.295: 99.2363% ( 4) 00:07:33.208 11040.295 - 11090.708: 99.2514% ( 3) 00:07:33.208 11090.708 - 11141.120: 99.2715% ( 4) 00:07:33.208 11141.120 - 11191.532: 99.2816% ( 2) 00:07:33.208 11191.532 - 11241.945: 99.2966% ( 3) 00:07:33.208 11241.945 - 11292.357: 99.3117% ( 3) 00:07:33.208 11292.357 - 11342.769: 99.3167% ( 1) 00:07:33.208 11342.769 - 11393.182: 99.3318% ( 3) 00:07:33.208 11393.182 - 11443.594: 99.3418% ( 2) 00:07:33.208 11443.594 - 11494.006: 99.3519% ( 2) 00:07:33.208 11494.006 - 11544.418: 99.3569% ( 1) 00:07:33.208 13006.375 - 13107.200: 99.3720% ( 3) 00:07:33.208 13107.200 - 13208.025: 99.4021% ( 6) 00:07:33.208 13208.025 - 13308.849: 99.4373% ( 7) 00:07:33.208 13308.849 - 13409.674: 99.4725% ( 7) 00:07:33.208 13409.674 - 13510.498: 99.5026% ( 6) 00:07:33.208 13510.498 - 13611.323: 99.5328% ( 6) 00:07:33.208 13611.323 - 13712.148: 99.5679% ( 7) 00:07:33.208 13712.148 - 13812.972: 99.5981% ( 6) 00:07:33.208 13812.972 - 13913.797: 99.6332% ( 7) 00:07:33.208 13913.797 - 14014.622: 99.6684% ( 7) 00:07:33.208 14014.622 - 14115.446: 99.6785% ( 2) 00:07:33.208 17442.658 - 17543.483: 99.6885% ( 2) 00:07:33.208 17543.483 - 17644.308: 99.7186% ( 6) 00:07:33.208 17644.308 - 17745.132: 99.7538% ( 7) 00:07:33.208 17745.132 - 17845.957: 99.7840% ( 6) 00:07:33.208 17845.957 - 17946.782: 99.8191% ( 7) 00:07:33.208 17946.782 - 18047.606: 99.8493% ( 6) 00:07:33.208 18047.606 - 18148.431: 99.8844% ( 7) 00:07:33.208 18148.431 - 18249.255: 99.9146% ( 6) 00:07:33.208 18249.255 - 18350.080: 99.9447% ( 6) 00:07:33.208 18350.080 - 18450.905: 99.9799% ( 7) 00:07:33.208 18450.905 - 18551.729: 100.0000% ( 4) 00:07:33.208 00:07:33.208 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:33.208 ============================================================================== 00:07:33.208 Range in us Cumulative IO count 00:07:33.208 3327.212 - 3352.418: 0.0251% ( 5) 00:07:33.208 3352.418 - 3377.625: 0.0452% ( 4) 00:07:33.208 3377.625 - 3402.831: 0.0553% ( 2) 00:07:33.208 3402.831 - 3428.037: 0.0603% ( 1) 00:07:33.208 3428.037 - 3453.243: 0.0754% ( 3) 00:07:33.208 3453.243 - 3478.449: 0.0854% ( 2) 00:07:33.208 3478.449 - 3503.655: 0.0955% ( 2) 00:07:33.208 3503.655 - 3528.862: 0.1105% ( 3) 00:07:33.208 3528.862 - 3554.068: 0.1206% ( 2) 00:07:33.208 3554.068 - 3579.274: 0.1306% ( 2) 00:07:33.209 3579.274 - 3604.480: 0.1457% ( 3) 00:07:33.209 3604.480 - 3629.686: 0.1557% ( 2) 00:07:33.209 3629.686 - 3654.892: 0.1608% ( 1) 00:07:33.209 3654.892 - 3680.098: 0.1708% ( 2) 00:07:33.209 3680.098 - 3705.305: 0.1809% ( 2) 00:07:33.209 3705.305 - 3730.511: 0.1959% ( 3) 00:07:33.209 3730.511 - 3755.717: 0.2060% ( 2) 00:07:33.209 3755.717 - 3780.923: 0.2160% ( 2) 00:07:33.209 3780.923 - 3806.129: 0.2311% ( 3) 00:07:33.209 3806.129 - 3831.335: 0.2412% ( 2) 00:07:33.209 3831.335 - 3856.542: 0.2562% ( 3) 00:07:33.209 3856.542 - 3881.748: 0.2663% ( 2) 00:07:33.209 3881.748 - 3906.954: 0.2814% ( 3) 00:07:33.209 3906.954 - 3932.160: 0.2914% ( 2) 00:07:33.209 3932.160 - 3957.366: 0.3014% ( 2) 00:07:33.209 3957.366 - 3982.572: 0.3115% ( 2) 00:07:33.209 3982.572 - 4007.778: 0.3215% ( 2) 00:07:33.209 4965.612 - 4990.818: 0.3416% ( 4) 00:07:33.209 4990.818 - 5016.025: 0.3517% ( 2) 00:07:33.209 5016.025 - 5041.231: 0.3617% ( 2) 00:07:33.209 5041.231 - 5066.437: 0.3718% ( 2) 00:07:33.209 5066.437 - 5091.643: 0.3818% ( 2) 00:07:33.209 5091.643 - 5116.849: 0.3969% ( 3) 00:07:33.209 5116.849 - 5142.055: 0.4070% ( 2) 
00:07:33.209 5142.055 - 5167.262: 0.4220% ( 3) 00:07:33.209 5167.262 - 5192.468: 0.4421% ( 4) 00:07:33.209 5192.468 - 5217.674: 0.4522% ( 2) 00:07:33.209 5217.674 - 5242.880: 0.4672% ( 3) 00:07:33.209 5242.880 - 5268.086: 0.4773% ( 2) 00:07:33.209 5268.086 - 5293.292: 0.4924% ( 3) 00:07:33.209 5293.292 - 5318.498: 0.4974% ( 1) 00:07:33.209 5318.498 - 5343.705: 0.5125% ( 3) 00:07:33.209 5343.705 - 5368.911: 0.5225% ( 2) 00:07:33.209 5368.911 - 5394.117: 0.5376% ( 3) 00:07:33.209 5394.117 - 5419.323: 0.5476% ( 2) 00:07:33.209 5419.323 - 5444.529: 0.5627% ( 3) 00:07:33.209 5444.529 - 5469.735: 0.5727% ( 2) 00:07:33.209 5469.735 - 5494.942: 0.5878% ( 3) 00:07:33.209 5494.942 - 5520.148: 0.5979% ( 2) 00:07:33.209 5520.148 - 5545.354: 0.6079% ( 2) 00:07:33.209 5545.354 - 5570.560: 0.6180% ( 2) 00:07:33.209 5570.560 - 5595.766: 0.6330% ( 3) 00:07:33.209 5595.766 - 5620.972: 0.6481% ( 3) 00:07:33.209 5620.972 - 5646.178: 0.6984% ( 10) 00:07:33.209 5646.178 - 5671.385: 0.7486% ( 10) 00:07:33.209 5671.385 - 5696.591: 0.8943% ( 29) 00:07:33.209 5696.591 - 5721.797: 1.1304% ( 47) 00:07:33.209 5721.797 - 5747.003: 1.4972% ( 73) 00:07:33.209 5747.003 - 5772.209: 2.3312% ( 166) 00:07:33.209 5772.209 - 5797.415: 3.4666% ( 226) 00:07:33.209 5797.415 - 5822.622: 5.0995% ( 325) 00:07:33.209 5822.622 - 5847.828: 6.9835% ( 375) 00:07:33.209 5847.828 - 5873.034: 9.2996% ( 461) 00:07:33.209 5873.034 - 5898.240: 11.6861% ( 475) 00:07:33.209 5898.240 - 5923.446: 14.1781% ( 496) 00:07:33.209 5923.446 - 5948.652: 16.8157% ( 525) 00:07:33.209 5948.652 - 5973.858: 19.3579% ( 506) 00:07:33.209 5973.858 - 5999.065: 21.9855% ( 523) 00:07:33.209 5999.065 - 6024.271: 24.5981% ( 520) 00:07:33.209 6024.271 - 6049.477: 27.2558% ( 529) 00:07:33.209 6049.477 - 6074.683: 29.8885% ( 524) 00:07:33.209 6074.683 - 6099.889: 32.6015% ( 540) 00:07:33.209 6099.889 - 6125.095: 35.2191% ( 521) 00:07:33.209 6125.095 - 6150.302: 37.8316% ( 520) 00:07:33.209 6150.302 - 6175.508: 40.4944% ( 530) 00:07:33.209 6175.508 - 6200.714: 43.1421% ( 527) 00:07:33.209 6200.714 - 6225.920: 45.8049% ( 530) 00:07:33.209 6225.920 - 6251.126: 48.4526% ( 527) 00:07:33.209 6251.126 - 6276.332: 51.0400% ( 515) 00:07:33.209 6276.332 - 6301.538: 53.6676% ( 523) 00:07:33.209 6301.538 - 6326.745: 56.3153% ( 527) 00:07:33.209 6326.745 - 6351.951: 59.0133% ( 537) 00:07:33.209 6351.951 - 6377.157: 61.6409% ( 523) 00:07:33.209 6377.157 - 6402.363: 64.2484% ( 519) 00:07:33.209 6402.363 - 6427.569: 66.9061% ( 529) 00:07:33.209 6427.569 - 6452.775: 69.5840% ( 533) 00:07:33.209 6452.775 - 6503.188: 74.8744% ( 1053) 00:07:33.209 6503.188 - 6553.600: 79.9940% ( 1019) 00:07:33.209 6553.600 - 6604.012: 84.5508% ( 907) 00:07:33.209 6604.012 - 6654.425: 87.9270% ( 672) 00:07:33.209 6654.425 - 6704.837: 90.0372% ( 420) 00:07:33.209 6704.837 - 6755.249: 91.0119% ( 194) 00:07:33.209 6755.249 - 6805.662: 91.6047% ( 118) 00:07:33.209 6805.662 - 6856.074: 91.9966% ( 78) 00:07:33.209 6856.074 - 6906.486: 92.3282% ( 66) 00:07:33.209 6906.486 - 6956.898: 92.6246% ( 59) 00:07:33.209 6956.898 - 7007.311: 92.9210% ( 59) 00:07:33.209 7007.311 - 7057.723: 93.1521% ( 46) 00:07:33.209 7057.723 - 7108.135: 93.3883% ( 47) 00:07:33.209 7108.135 - 7158.548: 93.6093% ( 44) 00:07:33.209 7158.548 - 7208.960: 93.8706% ( 52) 00:07:33.209 7208.960 - 7259.372: 94.1318% ( 52) 00:07:33.209 7259.372 - 7309.785: 94.3730% ( 48) 00:07:33.209 7309.785 - 7360.197: 94.5689% ( 39) 00:07:33.209 7360.197 - 7410.609: 94.7398% ( 34) 00:07:33.209 7410.609 - 7461.022: 94.9457% ( 41) 00:07:33.209 7461.022 - 7511.434: 
95.1316% ( 37) 00:07:33.209 7511.434 - 7561.846: 95.3225% ( 38) 00:07:33.209 7561.846 - 7612.258: 95.4883% ( 33) 00:07:33.209 7612.258 - 7662.671: 95.6340% ( 29) 00:07:33.209 7662.671 - 7713.083: 95.7446% ( 22) 00:07:33.209 7713.083 - 7763.495: 95.8802% ( 27) 00:07:33.209 7763.495 - 7813.908: 95.9757% ( 19) 00:07:33.209 7813.908 - 7864.320: 96.0862% ( 22) 00:07:33.209 7864.320 - 7914.732: 96.2018% ( 23) 00:07:33.209 7914.732 - 7965.145: 96.3173% ( 23) 00:07:33.209 7965.145 - 8015.557: 96.4279% ( 22) 00:07:33.209 8015.557 - 8065.969: 96.5384% ( 22) 00:07:33.209 8065.969 - 8116.382: 96.6188% ( 16) 00:07:33.209 8116.382 - 8166.794: 96.6891% ( 14) 00:07:33.209 8166.794 - 8217.206: 96.7494% ( 12) 00:07:33.209 8217.206 - 8267.618: 96.7996% ( 10) 00:07:33.209 8267.618 - 8318.031: 96.8650% ( 13) 00:07:33.209 8318.031 - 8368.443: 96.9252% ( 12) 00:07:33.209 8368.443 - 8418.855: 96.9855% ( 12) 00:07:33.209 8418.855 - 8469.268: 97.0508% ( 13) 00:07:33.209 8469.268 - 8519.680: 97.1162% ( 13) 00:07:33.209 8519.680 - 8570.092: 97.1764% ( 12) 00:07:33.209 8570.092 - 8620.505: 97.2317% ( 11) 00:07:33.209 8620.505 - 8670.917: 97.2719% ( 8) 00:07:33.209 8670.917 - 8721.329: 97.3221% ( 10) 00:07:33.209 8721.329 - 8771.742: 97.3623% ( 8) 00:07:33.209 8771.742 - 8822.154: 97.3975% ( 7) 00:07:33.209 8822.154 - 8872.566: 97.4277% ( 6) 00:07:33.209 8872.566 - 8922.978: 97.4578% ( 6) 00:07:33.209 8922.978 - 8973.391: 97.4829% ( 5) 00:07:33.209 8973.391 - 9023.803: 97.5131% ( 6) 00:07:33.209 9023.803 - 9074.215: 97.5482% ( 7) 00:07:33.209 9074.215 - 9124.628: 97.5934% ( 9) 00:07:33.209 9124.628 - 9175.040: 97.6186% ( 5) 00:07:33.209 9175.040 - 9225.452: 97.6487% ( 6) 00:07:33.209 9225.452 - 9275.865: 97.6839% ( 7) 00:07:33.209 9275.865 - 9326.277: 97.7140% ( 6) 00:07:33.209 9326.277 - 9376.689: 97.7592% ( 9) 00:07:33.209 9376.689 - 9427.102: 97.8195% ( 12) 00:07:33.209 9427.102 - 9477.514: 97.8798% ( 12) 00:07:33.209 9477.514 - 9527.926: 97.9451% ( 13) 00:07:33.209 9527.926 - 9578.338: 98.0255% ( 16) 00:07:33.209 9578.338 - 9628.751: 98.1009% ( 15) 00:07:33.209 9628.751 - 9679.163: 98.1712% ( 14) 00:07:33.209 9679.163 - 9729.575: 98.2466% ( 15) 00:07:33.209 9729.575 - 9779.988: 98.3119% ( 13) 00:07:33.209 9779.988 - 9830.400: 98.3571% ( 9) 00:07:33.209 9830.400 - 9880.812: 98.4174% ( 12) 00:07:33.209 9880.812 - 9931.225: 98.4777% ( 12) 00:07:33.209 9931.225 - 9981.637: 98.5330% ( 11) 00:07:33.209 9981.637 - 10032.049: 98.5832% ( 10) 00:07:33.209 10032.049 - 10082.462: 98.6334% ( 10) 00:07:33.209 10082.462 - 10132.874: 98.6887% ( 11) 00:07:33.209 10132.874 - 10183.286: 98.7490% ( 12) 00:07:33.209 10183.286 - 10233.698: 98.7992% ( 10) 00:07:33.209 10233.698 - 10284.111: 98.8495% ( 10) 00:07:33.209 10284.111 - 10334.523: 98.8846% ( 7) 00:07:33.209 10334.523 - 10384.935: 98.9248% ( 8) 00:07:33.209 10384.935 - 10435.348: 98.9600% ( 7) 00:07:33.209 10435.348 - 10485.760: 98.9801% ( 4) 00:07:33.209 10485.760 - 10536.172: 98.9952% ( 3) 00:07:33.209 10536.172 - 10586.585: 99.0052% ( 2) 00:07:33.209 10586.585 - 10636.997: 99.0153% ( 2) 00:07:33.209 10636.997 - 10687.409: 99.0253% ( 2) 00:07:33.209 10687.409 - 10737.822: 99.0303% ( 1) 00:07:33.209 10737.822 - 10788.234: 99.0605% ( 6) 00:07:33.209 10788.234 - 10838.646: 99.0806% ( 4) 00:07:33.209 10838.646 - 10889.058: 99.1007% ( 4) 00:07:33.209 10889.058 - 10939.471: 99.1107% ( 2) 00:07:33.209 10939.471 - 10989.883: 99.1158% ( 1) 00:07:33.209 10989.883 - 11040.295: 99.1258% ( 2) 00:07:33.209 11040.295 - 11090.708: 99.1359% ( 2) 00:07:33.209 11090.708 - 11141.120: 99.1459% ( 
2) 00:07:33.209 11141.120 - 11191.532: 99.1559% ( 2) 00:07:33.209 11191.532 - 11241.945: 99.1660% ( 2) 00:07:33.209 11241.945 - 11292.357: 99.1811% ( 3) 00:07:33.209 11292.357 - 11342.769: 99.1961% ( 3) 00:07:33.209 11342.769 - 11393.182: 99.2112% ( 3) 00:07:33.209 11393.182 - 11443.594: 99.2213% ( 2) 00:07:33.209 11443.594 - 11494.006: 99.2363% ( 3) 00:07:33.209 11494.006 - 11544.418: 99.2464% ( 2) 00:07:33.209 11544.418 - 11594.831: 99.2615% ( 3) 00:07:33.209 11594.831 - 11645.243: 99.2765% ( 3) 00:07:33.209 11645.243 - 11695.655: 99.2866% ( 2) 00:07:33.209 11695.655 - 11746.068: 99.3016% ( 3) 00:07:33.209 11746.068 - 11796.480: 99.3167% ( 3) 00:07:33.209 11796.480 - 11846.892: 99.3268% ( 2) 00:07:33.209 11846.892 - 11897.305: 99.3418% ( 3) 00:07:33.209 11897.305 - 11947.717: 99.3569% ( 3) 00:07:33.209 12502.252 - 12552.665: 99.3619% ( 1) 00:07:33.209 12552.665 - 12603.077: 99.3770% ( 3) 00:07:33.209 12603.077 - 12653.489: 99.3921% ( 3) 00:07:33.209 12653.489 - 12703.902: 99.4072% ( 3) 00:07:33.209 12703.902 - 12754.314: 99.4323% ( 5) 00:07:33.209 12754.314 - 12804.726: 99.4473% ( 3) 00:07:33.209 12804.726 - 12855.138: 99.4624% ( 3) 00:07:33.209 12855.138 - 12905.551: 99.4775% ( 3) 00:07:33.209 12905.551 - 13006.375: 99.5076% ( 6) 00:07:33.209 13006.375 - 13107.200: 99.5428% ( 7) 00:07:33.209 13107.200 - 13208.025: 99.5780% ( 7) 00:07:33.209 13208.025 - 13308.849: 99.6081% ( 6) 00:07:33.209 13308.849 - 13409.674: 99.6433% ( 7) 00:07:33.209 13409.674 - 13510.498: 99.6734% ( 6) 00:07:33.209 13510.498 - 13611.323: 99.6785% ( 1) 00:07:33.209 16837.711 - 16938.535: 99.6986% ( 4) 00:07:33.209 16938.535 - 17039.360: 99.7287% ( 6) 00:07:33.209 17039.360 - 17140.185: 99.7588% ( 6) 00:07:33.209 17140.185 - 17241.009: 99.7940% ( 7) 00:07:33.209 17241.009 - 17341.834: 99.8242% ( 6) 00:07:33.209 17341.834 - 17442.658: 99.8593% ( 7) 00:07:33.210 17442.658 - 17543.483: 99.8945% ( 7) 00:07:33.210 17543.483 - 17644.308: 99.9246% ( 6) 00:07:33.210 17644.308 - 17745.132: 99.9498% ( 5) 00:07:33.210 17745.132 - 17845.957: 99.9799% ( 6) 00:07:33.210 17845.957 - 17946.782: 100.0000% ( 4) 00:07:33.210 00:07:33.210 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:33.210 ============================================================================== 00:07:33.210 Range in us Cumulative IO count 00:07:33.210 3201.182 - 3213.785: 0.0100% ( 2) 00:07:33.210 3213.785 - 3226.388: 0.0452% ( 7) 00:07:33.210 3226.388 - 3251.594: 0.0703% ( 5) 00:07:33.210 3251.594 - 3276.800: 0.0804% ( 2) 00:07:33.210 3276.800 - 3302.006: 0.0904% ( 2) 00:07:33.210 3302.006 - 3327.212: 0.1005% ( 2) 00:07:33.210 3327.212 - 3352.418: 0.1055% ( 1) 00:07:33.210 3352.418 - 3377.625: 0.1105% ( 1) 00:07:33.210 3377.625 - 3402.831: 0.1206% ( 2) 00:07:33.210 3402.831 - 3428.037: 0.1306% ( 2) 00:07:33.210 3428.037 - 3453.243: 0.1407% ( 2) 00:07:33.210 3453.243 - 3478.449: 0.1608% ( 4) 00:07:33.210 3478.449 - 3503.655: 0.1758% ( 3) 00:07:33.210 3503.655 - 3528.862: 0.1859% ( 2) 00:07:33.210 3528.862 - 3554.068: 0.2010% ( 3) 00:07:33.210 3554.068 - 3579.274: 0.2110% ( 2) 00:07:33.210 3579.274 - 3604.480: 0.2211% ( 2) 00:07:33.210 3604.480 - 3629.686: 0.2261% ( 1) 00:07:33.210 3629.686 - 3654.892: 0.2412% ( 3) 00:07:33.210 3654.892 - 3680.098: 0.2512% ( 2) 00:07:33.210 3680.098 - 3705.305: 0.2613% ( 2) 00:07:33.210 3705.305 - 3730.511: 0.2763% ( 3) 00:07:33.210 3730.511 - 3755.717: 0.2864% ( 2) 00:07:33.210 3755.717 - 3780.923: 0.3014% ( 3) 00:07:33.210 3780.923 - 3806.129: 0.3115% ( 2) 00:07:33.210 3806.129 - 3831.335: 0.3215% ( 2) 
00:07:33.210 4789.169 - 4814.375: 0.3617% ( 8) 00:07:33.210 4814.375 - 4839.582: 0.3718% ( 2) 00:07:33.210 4839.582 - 4864.788: 0.3768% ( 1) 00:07:33.210 4864.788 - 4889.994: 0.3869% ( 2) 00:07:33.210 4889.994 - 4915.200: 0.3969% ( 2) 00:07:33.210 4915.200 - 4940.406: 0.4120% ( 3) 00:07:33.210 4940.406 - 4965.612: 0.4220% ( 2) 00:07:33.210 4965.612 - 4990.818: 0.4321% ( 2) 00:07:33.210 4990.818 - 5016.025: 0.4471% ( 3) 00:07:33.210 5016.025 - 5041.231: 0.4572% ( 2) 00:07:33.210 5041.231 - 5066.437: 0.4723% ( 3) 00:07:33.210 5066.437 - 5091.643: 0.4823% ( 2) 00:07:33.210 5091.643 - 5116.849: 0.4974% ( 3) 00:07:33.210 5116.849 - 5142.055: 0.5074% ( 2) 00:07:33.210 5142.055 - 5167.262: 0.5175% ( 2) 00:07:33.210 5167.262 - 5192.468: 0.5326% ( 3) 00:07:33.210 5192.468 - 5217.674: 0.5426% ( 2) 00:07:33.210 5217.674 - 5242.880: 0.5577% ( 3) 00:07:33.210 5242.880 - 5268.086: 0.5677% ( 2) 00:07:33.210 5268.086 - 5293.292: 0.5778% ( 2) 00:07:33.210 5293.292 - 5318.498: 0.5878% ( 2) 00:07:33.210 5318.498 - 5343.705: 0.5979% ( 2) 00:07:33.210 5343.705 - 5368.911: 0.6079% ( 2) 00:07:33.210 5368.911 - 5394.117: 0.6230% ( 3) 00:07:33.210 5394.117 - 5419.323: 0.6330% ( 2) 00:07:33.210 5419.323 - 5444.529: 0.6431% ( 2) 00:07:33.210 5595.766 - 5620.972: 0.6632% ( 4) 00:07:33.210 5620.972 - 5646.178: 0.6833% ( 4) 00:07:33.210 5646.178 - 5671.385: 0.7034% ( 4) 00:07:33.210 5671.385 - 5696.591: 0.8189% ( 23) 00:07:33.210 5696.591 - 5721.797: 1.1254% ( 61) 00:07:33.210 5721.797 - 5747.003: 1.6630% ( 107) 00:07:33.210 5747.003 - 5772.209: 2.3764% ( 142) 00:07:33.210 5772.209 - 5797.415: 3.5018% ( 224) 00:07:33.210 5797.415 - 5822.622: 5.0492% ( 308) 00:07:33.210 5822.622 - 5847.828: 6.9082% ( 370) 00:07:33.210 5847.828 - 5873.034: 9.0535% ( 427) 00:07:33.210 5873.034 - 5898.240: 11.4801% ( 483) 00:07:33.210 5898.240 - 5923.446: 14.1831% ( 538) 00:07:33.210 5923.446 - 5948.652: 16.8057% ( 522) 00:07:33.210 5948.652 - 5973.858: 19.2474% ( 486) 00:07:33.210 5973.858 - 5999.065: 21.8298% ( 514) 00:07:33.210 5999.065 - 6024.271: 24.4775% ( 527) 00:07:33.210 6024.271 - 6049.477: 27.0498% ( 512) 00:07:33.210 6049.477 - 6074.683: 29.7779% ( 543) 00:07:33.210 6074.683 - 6099.889: 32.4256% ( 527) 00:07:33.210 6099.889 - 6125.095: 35.1738% ( 547) 00:07:33.210 6125.095 - 6150.302: 37.8467% ( 532) 00:07:33.210 6150.302 - 6175.508: 40.4592% ( 520) 00:07:33.210 6175.508 - 6200.714: 43.1421% ( 534) 00:07:33.210 6200.714 - 6225.920: 45.7998% ( 529) 00:07:33.210 6225.920 - 6251.126: 48.4676% ( 531) 00:07:33.210 6251.126 - 6276.332: 51.1555% ( 535) 00:07:33.210 6276.332 - 6301.538: 53.8284% ( 532) 00:07:33.210 6301.538 - 6326.745: 56.4660% ( 525) 00:07:33.210 6326.745 - 6351.951: 59.1590% ( 536) 00:07:33.210 6351.951 - 6377.157: 61.7615% ( 518) 00:07:33.210 6377.157 - 6402.363: 64.4092% ( 527) 00:07:33.210 6402.363 - 6427.569: 67.0016% ( 516) 00:07:33.210 6427.569 - 6452.775: 69.6493% ( 527) 00:07:33.210 6452.775 - 6503.188: 74.8895% ( 1043) 00:07:33.210 6503.188 - 6553.600: 80.0794% ( 1033) 00:07:33.210 6553.600 - 6604.012: 84.5006% ( 880) 00:07:33.210 6604.012 - 6654.425: 87.8266% ( 662) 00:07:33.210 6654.425 - 6704.837: 89.8664% ( 406) 00:07:33.210 6704.837 - 6755.249: 90.8712% ( 200) 00:07:33.210 6755.249 - 6805.662: 91.4289% ( 111) 00:07:33.210 6805.662 - 6856.074: 91.8609% ( 86) 00:07:33.210 6856.074 - 6906.486: 92.1875% ( 65) 00:07:33.210 6906.486 - 6956.898: 92.4990% ( 62) 00:07:33.210 6956.898 - 7007.311: 92.7904% ( 58) 00:07:33.210 7007.311 - 7057.723: 93.0516% ( 52) 00:07:33.210 7057.723 - 7108.135: 93.2978% ( 49) 
00:07:33.210 7108.135 - 7158.548: 93.5792% ( 56) 00:07:33.210 7158.548 - 7208.960: 93.8555% ( 55) 00:07:33.210 7208.960 - 7259.372: 94.0916% ( 47) 00:07:33.210 7259.372 - 7309.785: 94.3027% ( 42) 00:07:33.210 7309.785 - 7360.197: 94.4735% ( 34) 00:07:33.210 7360.197 - 7410.609: 94.6694% ( 39) 00:07:33.210 7410.609 - 7461.022: 94.8453% ( 35) 00:07:33.210 7461.022 - 7511.434: 95.0060% ( 32) 00:07:33.210 7511.434 - 7561.846: 95.2070% ( 40) 00:07:33.210 7561.846 - 7612.258: 95.3678% ( 32) 00:07:33.210 7612.258 - 7662.671: 95.5185% ( 30) 00:07:33.210 7662.671 - 7713.083: 95.6592% ( 28) 00:07:33.210 7713.083 - 7763.495: 95.7697% ( 22) 00:07:33.210 7763.495 - 7813.908: 95.8953% ( 25) 00:07:33.210 7813.908 - 7864.320: 96.0109% ( 23) 00:07:33.210 7864.320 - 7914.732: 96.1264% ( 23) 00:07:33.210 7914.732 - 7965.145: 96.2721% ( 29) 00:07:33.210 7965.145 - 8015.557: 96.3977% ( 25) 00:07:33.210 8015.557 - 8065.969: 96.5233% ( 25) 00:07:33.210 8065.969 - 8116.382: 96.6489% ( 25) 00:07:33.210 8116.382 - 8166.794: 96.7494% ( 20) 00:07:33.210 8166.794 - 8217.206: 96.8449% ( 19) 00:07:33.210 8217.206 - 8267.618: 96.9353% ( 18) 00:07:33.210 8267.618 - 8318.031: 97.0107% ( 15) 00:07:33.210 8318.031 - 8368.443: 97.0559% ( 9) 00:07:33.210 8368.443 - 8418.855: 97.1212% ( 13) 00:07:33.210 8418.855 - 8469.268: 97.1764% ( 11) 00:07:33.210 8469.268 - 8519.680: 97.2116% ( 7) 00:07:33.210 8519.680 - 8570.092: 97.2518% ( 8) 00:07:33.210 8570.092 - 8620.505: 97.3020% ( 10) 00:07:33.210 8620.505 - 8670.917: 97.3473% ( 9) 00:07:33.210 8670.917 - 8721.329: 97.3975% ( 10) 00:07:33.210 8721.329 - 8771.742: 97.4427% ( 9) 00:07:33.210 8771.742 - 8822.154: 97.4829% ( 8) 00:07:33.210 8822.154 - 8872.566: 97.5080% ( 5) 00:07:33.210 8872.566 - 8922.978: 97.5332% ( 5) 00:07:33.210 8922.978 - 8973.391: 97.5985% ( 13) 00:07:33.210 8973.391 - 9023.803: 97.6437% ( 9) 00:07:33.210 9023.803 - 9074.215: 97.6789% ( 7) 00:07:33.210 9074.215 - 9124.628: 97.6990% ( 4) 00:07:33.210 9124.628 - 9175.040: 97.7442% ( 9) 00:07:33.210 9175.040 - 9225.452: 97.7793% ( 7) 00:07:33.210 9225.452 - 9275.865: 97.8145% ( 7) 00:07:33.210 9275.865 - 9326.277: 97.8497% ( 7) 00:07:33.210 9326.277 - 9376.689: 97.8848% ( 7) 00:07:33.210 9376.689 - 9427.102: 97.9200% ( 7) 00:07:33.210 9427.102 - 9477.514: 97.9552% ( 7) 00:07:33.210 9477.514 - 9527.926: 97.9954% ( 8) 00:07:33.210 9527.926 - 9578.338: 98.0406% ( 9) 00:07:33.210 9578.338 - 9628.751: 98.1109% ( 14) 00:07:33.210 9628.751 - 9679.163: 98.1612% ( 10) 00:07:33.210 9679.163 - 9729.575: 98.2416% ( 16) 00:07:33.210 9729.575 - 9779.988: 98.3069% ( 13) 00:07:33.210 9779.988 - 9830.400: 98.3822% ( 15) 00:07:33.210 9830.400 - 9880.812: 98.4475% ( 13) 00:07:33.210 9880.812 - 9931.225: 98.5129% ( 13) 00:07:33.210 9931.225 - 9981.637: 98.5832% ( 14) 00:07:33.210 9981.637 - 10032.049: 98.6586% ( 15) 00:07:33.210 10032.049 - 10082.462: 98.7239% ( 13) 00:07:33.210 10082.462 - 10132.874: 98.7691% ( 9) 00:07:33.210 10132.874 - 10183.286: 98.8143% ( 9) 00:07:33.210 10183.286 - 10233.698: 98.8495% ( 7) 00:07:33.210 10233.698 - 10284.111: 98.8846% ( 7) 00:07:33.210 10284.111 - 10334.523: 98.9198% ( 7) 00:07:33.210 10334.523 - 10384.935: 98.9550% ( 7) 00:07:33.210 10384.935 - 10435.348: 99.0253% ( 14) 00:07:33.210 10435.348 - 10485.760: 99.0806% ( 11) 00:07:33.210 10485.760 - 10536.172: 99.0906% ( 2) 00:07:33.210 10536.172 - 10586.585: 99.1007% ( 2) 00:07:33.210 10586.585 - 10636.997: 99.1107% ( 2) 00:07:33.210 10687.409 - 10737.822: 99.1158% ( 1) 00:07:33.210 10737.822 - 10788.234: 99.1258% ( 2) 00:07:33.210 10788.234 - 
10838.646: 99.1308% ( 1) 00:07:33.210 10838.646 - 10889.058: 99.1359% ( 1) 00:07:33.210 10889.058 - 10939.471: 99.1459% ( 2) 00:07:33.210 10939.471 - 10989.883: 99.1509% ( 1) 00:07:33.210 10989.883 - 11040.295: 99.1710% ( 4) 00:07:33.210 11040.295 - 11090.708: 99.1811% ( 2) 00:07:33.210 11090.708 - 11141.120: 99.1861% ( 1) 00:07:33.210 11141.120 - 11191.532: 99.1961% ( 2) 00:07:33.210 11191.532 - 11241.945: 99.2012% ( 1) 00:07:33.210 11241.945 - 11292.357: 99.2112% ( 2) 00:07:33.210 11292.357 - 11342.769: 99.2213% ( 2) 00:07:33.210 11342.769 - 11393.182: 99.2263% ( 1) 00:07:33.210 11393.182 - 11443.594: 99.2363% ( 2) 00:07:33.210 11443.594 - 11494.006: 99.2414% ( 1) 00:07:33.210 11494.006 - 11544.418: 99.2514% ( 2) 00:07:33.210 11544.418 - 11594.831: 99.2615% ( 2) 00:07:33.210 11594.831 - 11645.243: 99.2665% ( 1) 00:07:33.210 11645.243 - 11695.655: 99.2765% ( 2) 00:07:33.210 11695.655 - 11746.068: 99.2816% ( 1) 00:07:33.210 11746.068 - 11796.480: 99.2916% ( 2) 00:07:33.210 11796.480 - 11846.892: 99.3016% ( 2) 00:07:33.210 11846.892 - 11897.305: 99.3067% ( 1) 00:07:33.210 11897.305 - 11947.717: 99.3167% ( 2) 00:07:33.210 11947.717 - 11998.129: 99.3268% ( 2) 00:07:33.210 11998.129 - 12048.542: 99.3418% ( 3) 00:07:33.210 12048.542 - 12098.954: 99.3670% ( 5) 00:07:33.210 12098.954 - 12149.366: 99.3921% ( 5) 00:07:33.210 12149.366 - 12199.778: 99.4172% ( 5) 00:07:33.210 12199.778 - 12250.191: 99.4323% ( 3) 00:07:33.210 12250.191 - 12300.603: 99.4524% ( 4) 00:07:33.210 12300.603 - 12351.015: 99.4674% ( 3) 00:07:33.210 12351.015 - 12401.428: 99.4825% ( 3) 00:07:33.210 12401.428 - 12451.840: 99.4976% ( 3) 00:07:33.210 12451.840 - 12502.252: 99.5127% ( 3) 00:07:33.210 12502.252 - 12552.665: 99.5328% ( 4) 00:07:33.211 12552.665 - 12603.077: 99.5478% ( 3) 00:07:33.211 12603.077 - 12653.489: 99.5629% ( 3) 00:07:33.211 12653.489 - 12703.902: 99.5780% ( 3) 00:07:33.211 12703.902 - 12754.314: 99.5981% ( 4) 00:07:33.211 12754.314 - 12804.726: 99.6131% ( 3) 00:07:33.211 12804.726 - 12855.138: 99.6282% ( 3) 00:07:33.211 12855.138 - 12905.551: 99.6483% ( 4) 00:07:33.211 12905.551 - 13006.375: 99.6785% ( 6) 00:07:33.211 16232.763 - 16333.588: 99.6986% ( 4) 00:07:33.211 16333.588 - 16434.412: 99.7237% ( 5) 00:07:33.211 16434.412 - 16535.237: 99.7538% ( 6) 00:07:33.211 16535.237 - 16636.062: 99.7789% ( 5) 00:07:33.211 16636.062 - 16736.886: 99.8091% ( 6) 00:07:33.211 16736.886 - 16837.711: 99.8392% ( 6) 00:07:33.211 16837.711 - 16938.535: 99.8694% ( 6) 00:07:33.211 16938.535 - 17039.360: 99.8995% ( 6) 00:07:33.211 17039.360 - 17140.185: 99.9297% ( 6) 00:07:33.211 17140.185 - 17241.009: 99.9598% ( 6) 00:07:33.211 17241.009 - 17341.834: 99.9900% ( 6) 00:07:33.211 17341.834 - 17442.658: 100.0000% ( 2) 00:07:33.211 00:07:33.211 23:20:18 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:34.146 Initializing NVMe Controllers 00:07:34.146 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:34.146 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:34.146 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:34.146 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:34.146 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:34.146 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:34.146 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:34.146 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:34.146 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:34.146 
Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:34.146 Initialization complete. Launching workers. 00:07:34.146 ======================================================== 00:07:34.146 Latency(us) 00:07:34.146 Device Information : IOPS MiB/s Average min max 00:07:34.146 PCIE (0000:00:10.0) NSID 1 from core 0: 15358.11 179.98 8337.38 5652.88 26485.20 00:07:34.146 PCIE (0000:00:11.0) NSID 1 from core 0: 15358.11 179.98 8331.01 5729.21 26289.41 00:07:34.146 PCIE (0000:00:13.0) NSID 1 from core 0: 15358.11 179.98 8324.35 4942.93 26029.23 00:07:34.146 PCIE (0000:00:12.0) NSID 1 from core 0: 15358.11 179.98 8317.29 4528.58 26051.45 00:07:34.146 PCIE (0000:00:12.0) NSID 2 from core 0: 15358.11 179.98 8310.28 4017.36 26588.00 00:07:34.146 PCIE (0000:00:12.0) NSID 3 from core 0: 15358.11 179.98 8303.48 3718.12 26490.24 00:07:34.146 ======================================================== 00:07:34.146 Total : 92148.68 1079.87 8320.63 3718.12 26588.00 00:07:34.146 00:07:34.146 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.146 ================================================================================= 00:07:34.146 1.00000% : 6024.271us 00:07:34.146 10.00000% : 6402.363us 00:07:34.146 25.00000% : 6553.600us 00:07:34.146 50.00000% : 7007.311us 00:07:34.146 75.00000% : 8267.618us 00:07:34.146 90.00000% : 14014.622us 00:07:34.146 95.00000% : 14619.569us 00:07:34.146 98.00000% : 15325.342us 00:07:34.146 99.00000% : 16131.938us 00:07:34.146 99.50000% : 17140.185us 00:07:34.146 99.90000% : 26214.400us 00:07:34.146 99.99000% : 26617.698us 00:07:34.146 99.99900% : 26617.698us 00:07:34.146 99.99990% : 26617.698us 00:07:34.146 99.99999% : 26617.698us 00:07:34.146 00:07:34.146 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.146 ================================================================================= 00:07:34.146 1.00000% : 6099.889us 00:07:34.146 10.00000% : 6503.188us 00:07:34.146 25.00000% : 6654.425us 00:07:34.146 50.00000% : 6856.074us 00:07:34.146 75.00000% : 8217.206us 00:07:34.146 90.00000% : 14115.446us 00:07:34.146 95.00000% : 14417.920us 00:07:34.146 98.00000% : 14821.218us 00:07:34.146 99.00000% : 15829.465us 00:07:34.146 99.50000% : 17644.308us 00:07:34.146 99.90000% : 26012.751us 00:07:34.146 99.99000% : 26416.049us 00:07:34.146 99.99900% : 26416.049us 00:07:34.146 99.99990% : 26416.049us 00:07:34.146 99.99999% : 26416.049us 00:07:34.146 00:07:34.146 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:34.146 ================================================================================= 00:07:34.146 1.00000% : 6099.889us 00:07:34.146 10.00000% : 6503.188us 00:07:34.146 25.00000% : 6654.425us 00:07:34.146 50.00000% : 6856.074us 00:07:34.146 75.00000% : 8217.206us 00:07:34.146 90.00000% : 14115.446us 00:07:34.146 95.00000% : 14417.920us 00:07:34.146 98.00000% : 14720.394us 00:07:34.146 99.00000% : 15426.166us 00:07:34.146 99.50000% : 18753.378us 00:07:34.146 99.90000% : 25811.102us 00:07:34.146 99.99000% : 26012.751us 00:07:34.146 99.99900% : 26214.400us 00:07:34.146 99.99990% : 26214.400us 00:07:34.146 99.99999% : 26214.400us 00:07:34.146 00:07:34.146 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:34.146 ================================================================================= 00:07:34.146 1.00000% : 6099.889us 00:07:34.146 10.00000% : 6503.188us 00:07:34.146 25.00000% : 6654.425us 00:07:34.146 50.00000% : 6856.074us 00:07:34.146 75.00000% : 8217.206us 00:07:34.146 90.00000% 
: 14216.271us 00:07:34.146 95.00000% : 14417.920us 00:07:34.146 98.00000% : 14720.394us 00:07:34.146 99.00000% : 15426.166us 00:07:34.146 99.50000% : 18854.203us 00:07:34.146 99.90000% : 25811.102us 00:07:34.146 99.99000% : 26214.400us 00:07:34.146 99.99900% : 26214.400us 00:07:34.146 99.99990% : 26214.400us 00:07:34.146 99.99999% : 26214.400us 00:07:34.146 00:07:34.146 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:34.146 ================================================================================= 00:07:34.146 1.00000% : 6074.683us 00:07:34.146 10.00000% : 6503.188us 00:07:34.146 25.00000% : 6654.425us 00:07:34.146 50.00000% : 6856.074us 00:07:34.146 75.00000% : 8217.206us 00:07:34.146 90.00000% : 14115.446us 00:07:34.146 95.00000% : 14417.920us 00:07:34.146 98.00000% : 14821.218us 00:07:34.146 99.00000% : 15426.166us 00:07:34.146 99.50000% : 18652.554us 00:07:34.146 99.90000% : 26416.049us 00:07:34.146 99.99000% : 26617.698us 00:07:34.146 99.99900% : 26617.698us 00:07:34.146 99.99990% : 26617.698us 00:07:34.146 99.99999% : 26617.698us 00:07:34.146 00:07:34.146 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:34.146 ================================================================================= 00:07:34.146 1.00000% : 6099.889us 00:07:34.146 10.00000% : 6503.188us 00:07:34.146 25.00000% : 6654.425us 00:07:34.146 50.00000% : 6856.074us 00:07:34.146 75.00000% : 8217.206us 00:07:34.146 90.00000% : 14115.446us 00:07:34.146 95.00000% : 14417.920us 00:07:34.146 98.00000% : 14821.218us 00:07:34.146 99.00000% : 15426.166us 00:07:34.146 99.50000% : 18753.378us 00:07:34.146 99.90000% : 26012.751us 00:07:34.146 99.99000% : 26617.698us 00:07:34.146 99.99900% : 26617.698us 00:07:34.146 99.99990% : 26617.698us 00:07:34.146 99.99999% : 26617.698us 00:07:34.146 00:07:34.146 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:34.146 ============================================================================== 00:07:34.146 Range in us Cumulative IO count 00:07:34.146 5646.178 - 5671.385: 0.0065% ( 1) 00:07:34.146 5721.797 - 5747.003: 0.0130% ( 1) 00:07:34.146 5772.209 - 5797.415: 0.0195% ( 1) 00:07:34.146 5797.415 - 5822.622: 0.0389% ( 3) 00:07:34.146 5847.828 - 5873.034: 0.1686% ( 20) 00:07:34.146 5873.034 - 5898.240: 0.3436% ( 27) 00:07:34.146 5898.240 - 5923.446: 0.5316% ( 29) 00:07:34.146 5923.446 - 5948.652: 0.6159% ( 13) 00:07:34.146 5948.652 - 5973.858: 0.7586% ( 22) 00:07:34.146 5973.858 - 5999.065: 0.9466% ( 29) 00:07:34.146 5999.065 - 6024.271: 1.1540% ( 32) 00:07:34.146 6024.271 - 6049.477: 1.4847% ( 51) 00:07:34.146 6049.477 - 6074.683: 1.8607% ( 58) 00:07:34.146 6074.683 - 6099.889: 2.3729% ( 79) 00:07:34.146 6099.889 - 6125.095: 2.6776% ( 47) 00:07:34.146 6125.095 - 6150.302: 2.9499% ( 42) 00:07:34.146 6150.302 - 6175.508: 3.1574% ( 32) 00:07:34.146 6175.508 - 6200.714: 3.6048% ( 69) 00:07:34.146 6200.714 - 6225.920: 3.9873% ( 59) 00:07:34.146 6225.920 - 6251.126: 4.4152% ( 66) 00:07:34.146 6251.126 - 6276.332: 4.9857% ( 88) 00:07:34.146 6276.332 - 6301.538: 5.6406% ( 101) 00:07:34.146 6301.538 - 6326.745: 6.6325% ( 153) 00:07:34.146 6326.745 - 6351.951: 7.7152% ( 167) 00:07:34.146 6351.951 - 6377.157: 9.1675% ( 224) 00:07:34.146 6377.157 - 6402.363: 10.8143% ( 254) 00:07:34.146 6402.363 - 6427.569: 12.8112% ( 308) 00:07:34.146 6427.569 - 6452.775: 15.1712% ( 364) 00:07:34.146 6452.775 - 6503.188: 19.9300% ( 734) 00:07:34.146 6503.188 - 6553.600: 25.1556% ( 806) 00:07:34.146 6553.600 - 6604.012: 29.3633% ( 649) 00:07:34.146 
6604.012 - 6654.425: 33.4608% ( 632) 00:07:34.146 6654.425 - 6704.837: 36.7285% ( 504) 00:07:34.146 6704.837 - 6755.249: 39.6330% ( 448) 00:07:34.146 6755.249 - 6805.662: 42.1875% ( 394) 00:07:34.146 6805.662 - 6856.074: 44.4826% ( 354) 00:07:34.146 6856.074 - 6906.486: 46.7518% ( 350) 00:07:34.146 6906.486 - 6956.898: 48.9497% ( 339) 00:07:34.146 6956.898 - 7007.311: 51.3226% ( 366) 00:07:34.146 7007.311 - 7057.723: 53.5918% ( 350) 00:07:34.146 7057.723 - 7108.135: 55.8610% ( 350) 00:07:34.146 7108.135 - 7158.548: 57.8384% ( 305) 00:07:34.146 7158.548 - 7208.960: 59.4074% ( 242) 00:07:34.146 7208.960 - 7259.372: 60.7106% ( 201) 00:07:34.146 7259.372 - 7309.785: 61.6831% ( 150) 00:07:34.146 7309.785 - 7360.197: 62.7204% ( 160) 00:07:34.146 7360.197 - 7410.609: 63.8421% ( 173) 00:07:34.146 7410.609 - 7461.022: 64.8405% ( 154) 00:07:34.146 7461.022 - 7511.434: 65.6574% ( 126) 00:07:34.146 7511.434 - 7561.846: 66.3317% ( 104) 00:07:34.146 7561.846 - 7612.258: 67.0319% ( 108) 00:07:34.146 7612.258 - 7662.671: 67.7256% ( 107) 00:07:34.146 7662.671 - 7713.083: 68.2508% ( 81) 00:07:34.146 7713.083 - 7763.495: 68.9315% ( 105) 00:07:34.147 7763.495 - 7813.908: 69.8133% ( 136) 00:07:34.147 7813.908 - 7864.320: 70.6043% ( 122) 00:07:34.147 7864.320 - 7914.732: 71.2850% ( 105) 00:07:34.147 7914.732 - 7965.145: 71.7842% ( 77) 00:07:34.147 7965.145 - 8015.557: 72.5947% ( 125) 00:07:34.147 8015.557 - 8065.969: 73.2560% ( 102) 00:07:34.147 8065.969 - 8116.382: 73.8395% ( 90) 00:07:34.147 8116.382 - 8166.794: 74.4424% ( 93) 00:07:34.147 8166.794 - 8217.206: 74.9611% ( 80) 00:07:34.147 8217.206 - 8267.618: 75.5641% ( 93) 00:07:34.147 8267.618 - 8318.031: 76.1476% ( 90) 00:07:34.147 8318.031 - 8368.443: 76.6792% ( 82) 00:07:34.147 8368.443 - 8418.855: 77.1979% ( 80) 00:07:34.147 8418.855 - 8469.268: 77.6387% ( 68) 00:07:34.147 8469.268 - 8519.680: 77.9629% ( 50) 00:07:34.147 8519.680 - 8570.092: 78.3195% ( 55) 00:07:34.147 8570.092 - 8620.505: 78.6177% ( 46) 00:07:34.147 8620.505 - 8670.917: 78.9225% ( 47) 00:07:34.147 8670.917 - 8721.329: 79.1688% ( 38) 00:07:34.147 8721.329 - 8771.742: 79.3893% ( 34) 00:07:34.147 8771.742 - 8822.154: 79.5513% ( 25) 00:07:34.147 8822.154 - 8872.566: 79.6616% ( 17) 00:07:34.147 8872.566 - 8922.978: 79.7523% ( 14) 00:07:34.147 8922.978 - 8973.391: 79.8626% ( 17) 00:07:34.147 8973.391 - 9023.803: 79.9598% ( 15) 00:07:34.147 9023.803 - 9074.215: 80.0052% ( 7) 00:07:34.147 9074.215 - 9124.628: 80.0311% ( 4) 00:07:34.147 9124.628 - 9175.040: 80.0635% ( 5) 00:07:34.147 9175.040 - 9225.452: 80.0765% ( 2) 00:07:34.147 9225.452 - 9275.865: 80.1349% ( 9) 00:07:34.147 9275.865 - 9326.277: 80.2710% ( 21) 00:07:34.147 9326.277 - 9376.689: 80.3683% ( 15) 00:07:34.147 9376.689 - 9427.102: 80.4266% ( 9) 00:07:34.147 9427.102 - 9477.514: 80.4655% ( 6) 00:07:34.147 9477.514 - 9527.926: 80.6859% ( 34) 00:07:34.147 9527.926 - 9578.338: 80.7184% ( 5) 00:07:34.147 9578.338 - 9628.751: 80.7702% ( 8) 00:07:34.147 9628.751 - 9679.163: 80.8026% ( 5) 00:07:34.147 9679.163 - 9729.575: 80.8351% ( 5) 00:07:34.147 9729.575 - 9779.988: 80.8804% ( 7) 00:07:34.147 9779.988 - 9830.400: 80.9388% ( 9) 00:07:34.147 9830.400 - 9880.812: 81.0490% ( 17) 00:07:34.147 9880.812 - 9931.225: 81.1333% ( 13) 00:07:34.147 9931.225 - 9981.637: 81.2500% ( 18) 00:07:34.147 9981.637 - 10032.049: 81.3019% ( 8) 00:07:34.147 10032.049 - 10082.462: 81.3537% ( 8) 00:07:34.147 10082.462 - 10132.874: 81.4186% ( 10) 00:07:34.147 10132.874 - 10183.286: 81.4834% ( 10) 00:07:34.147 10183.286 - 10233.698: 81.5418% ( 9) 00:07:34.147 
10233.698 - 10284.111: 81.6001% ( 9) 00:07:34.147 10284.111 - 10334.523: 81.6779% ( 12) 00:07:34.147 10334.523 - 10384.935: 81.7687% ( 14) 00:07:34.147 10384.935 - 10435.348: 81.8270% ( 9) 00:07:34.147 10435.348 - 10485.760: 81.8854% ( 9) 00:07:34.147 10485.760 - 10536.172: 81.9437% ( 9) 00:07:34.147 10536.172 - 10586.585: 82.0150% ( 11) 00:07:34.147 10586.585 - 10636.997: 82.0604% ( 7) 00:07:34.147 10636.997 - 10687.409: 82.1123% ( 8) 00:07:34.147 10687.409 - 10737.822: 82.1642% ( 8) 00:07:34.147 10737.822 - 10788.234: 82.2160% ( 8) 00:07:34.147 10788.234 - 10838.646: 82.2614% ( 7) 00:07:34.147 10838.646 - 10889.058: 82.3003% ( 6) 00:07:34.147 10889.058 - 10939.471: 82.3392% ( 6) 00:07:34.147 10939.471 - 10989.883: 82.3716% ( 5) 00:07:34.147 10989.883 - 11040.295: 82.3976% ( 4) 00:07:34.147 11090.708 - 11141.120: 82.4170% ( 3) 00:07:34.147 11141.120 - 11191.532: 82.4235% ( 1) 00:07:34.147 11292.357 - 11342.769: 82.4365% ( 2) 00:07:34.147 11393.182 - 11443.594: 82.4494% ( 2) 00:07:34.147 11443.594 - 11494.006: 82.4624% ( 2) 00:07:34.147 11494.006 - 11544.418: 82.4883% ( 4) 00:07:34.147 11544.418 - 11594.831: 82.5013% ( 2) 00:07:34.147 11594.831 - 11645.243: 82.5078% ( 1) 00:07:34.147 11645.243 - 11695.655: 82.5272% ( 3) 00:07:34.147 11695.655 - 11746.068: 82.5337% ( 1) 00:07:34.147 11746.068 - 11796.480: 82.5402% ( 1) 00:07:34.147 11796.480 - 11846.892: 82.5532% ( 2) 00:07:34.147 11846.892 - 11897.305: 82.5596% ( 1) 00:07:34.147 11897.305 - 11947.717: 82.5985% ( 6) 00:07:34.147 11947.717 - 11998.129: 82.6245% ( 4) 00:07:34.147 11998.129 - 12048.542: 82.7217% ( 15) 00:07:34.147 12048.542 - 12098.954: 82.8060% ( 13) 00:07:34.147 12098.954 - 12149.366: 82.8255% ( 3) 00:07:34.147 12149.366 - 12199.778: 82.8384% ( 2) 00:07:34.147 12199.778 - 12250.191: 82.8644% ( 4) 00:07:34.147 12250.191 - 12300.603: 82.8838% ( 3) 00:07:34.147 12300.603 - 12351.015: 82.9033% ( 3) 00:07:34.147 12351.015 - 12401.428: 82.9551% ( 8) 00:07:34.147 12401.428 - 12451.840: 83.0265% ( 11) 00:07:34.147 12451.840 - 12502.252: 83.0978% ( 11) 00:07:34.147 12502.252 - 12552.665: 83.1367% ( 6) 00:07:34.147 12552.665 - 12603.077: 83.2858% ( 23) 00:07:34.147 12603.077 - 12653.489: 83.3182% ( 5) 00:07:34.147 12653.489 - 12703.902: 83.3377% ( 3) 00:07:34.147 12703.902 - 12754.314: 83.3636% ( 4) 00:07:34.147 12754.314 - 12804.726: 83.3895% ( 4) 00:07:34.147 12804.726 - 12855.138: 83.4284% ( 6) 00:07:34.147 12855.138 - 12905.551: 83.4673% ( 6) 00:07:34.147 12905.551 - 13006.375: 83.5386% ( 11) 00:07:34.147 13006.375 - 13107.200: 83.6618% ( 19) 00:07:34.147 13107.200 - 13208.025: 83.8239% ( 25) 00:07:34.147 13208.025 - 13308.849: 84.0638% ( 37) 00:07:34.147 13308.849 - 13409.674: 84.5890% ( 81) 00:07:34.147 13409.674 - 13510.498: 85.3216% ( 113) 00:07:34.147 13510.498 - 13611.323: 86.0866% ( 118) 00:07:34.147 13611.323 - 13712.148: 86.9813% ( 138) 00:07:34.147 13712.148 - 13812.972: 88.2002% ( 188) 00:07:34.147 13812.972 - 13913.797: 89.6979% ( 231) 00:07:34.147 13913.797 - 14014.622: 91.2669% ( 242) 00:07:34.147 14014.622 - 14115.446: 92.3431% ( 166) 00:07:34.147 14115.446 - 14216.271: 93.0239% ( 105) 00:07:34.147 14216.271 - 14317.095: 93.7111% ( 106) 00:07:34.147 14317.095 - 14417.920: 94.2946% ( 90) 00:07:34.147 14417.920 - 14518.745: 94.8198% ( 81) 00:07:34.147 14518.745 - 14619.569: 95.3903% ( 88) 00:07:34.147 14619.569 - 14720.394: 95.9933% ( 93) 00:07:34.147 14720.394 - 14821.218: 96.4082% ( 64) 00:07:34.147 14821.218 - 14922.043: 96.7388% ( 51) 00:07:34.147 14922.043 - 15022.868: 97.1862% ( 69) 00:07:34.147 15022.868 - 
15123.692: 97.5169% ( 51) 00:07:34.147 15123.692 - 15224.517: 97.8734% ( 55) 00:07:34.147 15224.517 - 15325.342: 98.2235% ( 54) 00:07:34.147 15325.342 - 15426.166: 98.5348% ( 48) 00:07:34.147 15426.166 - 15526.991: 98.6774% ( 22) 00:07:34.147 15526.991 - 15627.815: 98.7682% ( 14) 00:07:34.147 15627.815 - 15728.640: 98.8719% ( 16) 00:07:34.147 15728.640 - 15829.465: 98.9173% ( 7) 00:07:34.147 15829.465 - 15930.289: 98.9497% ( 5) 00:07:34.147 15930.289 - 16031.114: 98.9951% ( 7) 00:07:34.147 16031.114 - 16131.938: 99.0534% ( 9) 00:07:34.147 16131.938 - 16232.763: 99.1377% ( 13) 00:07:34.147 16232.763 - 16333.588: 99.1636% ( 4) 00:07:34.147 16333.588 - 16434.412: 99.2285% ( 10) 00:07:34.147 16434.412 - 16535.237: 99.2803% ( 8) 00:07:34.147 16535.237 - 16636.062: 99.3387% ( 9) 00:07:34.147 16636.062 - 16736.886: 99.3906% ( 8) 00:07:34.147 16736.886 - 16837.711: 99.4359% ( 7) 00:07:34.147 16837.711 - 16938.535: 99.4619% ( 4) 00:07:34.147 16938.535 - 17039.360: 99.4878% ( 4) 00:07:34.147 17039.360 - 17140.185: 99.5073% ( 3) 00:07:34.147 17140.185 - 17241.009: 99.5137% ( 1) 00:07:34.147 17241.009 - 17341.834: 99.5202% ( 1) 00:07:34.147 17341.834 - 17442.658: 99.5332% ( 2) 00:07:34.147 17442.658 - 17543.483: 99.5462% ( 2) 00:07:34.147 17543.483 - 17644.308: 99.5526% ( 1) 00:07:34.147 17644.308 - 17745.132: 99.5721% ( 3) 00:07:34.147 17745.132 - 17845.957: 99.5851% ( 2) 00:07:34.147 24500.382 - 24601.206: 99.5980% ( 2) 00:07:34.147 24601.206 - 24702.031: 99.6175% ( 3) 00:07:34.147 24702.031 - 24802.855: 99.6369% ( 3) 00:07:34.147 24802.855 - 24903.680: 99.6564% ( 3) 00:07:34.147 24903.680 - 25004.505: 99.6823% ( 4) 00:07:34.147 25004.505 - 25105.329: 99.7082% ( 4) 00:07:34.147 25105.329 - 25206.154: 99.7212% ( 2) 00:07:34.147 25206.154 - 25306.978: 99.7471% ( 4) 00:07:34.147 25306.978 - 25407.803: 99.7666% ( 3) 00:07:34.147 25407.803 - 25508.628: 99.7860% ( 3) 00:07:34.147 25508.628 - 25609.452: 99.8120% ( 4) 00:07:34.147 25609.452 - 25710.277: 99.8314% ( 3) 00:07:34.147 25710.277 - 25811.102: 99.8509% ( 3) 00:07:34.147 25811.102 - 26012.751: 99.8963% ( 7) 00:07:34.147 26012.751 - 26214.400: 99.9416% ( 7) 00:07:34.147 26214.400 - 26416.049: 99.9870% ( 7) 00:07:34.147 26416.049 - 26617.698: 100.0000% ( 2) 00:07:34.147 00:07:34.147 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:34.147 ============================================================================== 00:07:34.147 Range in us Cumulative IO count 00:07:34.147 5721.797 - 5747.003: 0.0065% ( 1) 00:07:34.147 5747.003 - 5772.209: 0.0130% ( 1) 00:07:34.147 5822.622 - 5847.828: 0.0389% ( 4) 00:07:34.147 5847.828 - 5873.034: 0.0778% ( 6) 00:07:34.148 5873.034 - 5898.240: 0.1556% ( 12) 00:07:34.148 5898.240 - 5923.446: 0.2204% ( 10) 00:07:34.148 5923.446 - 5948.652: 0.2918% ( 11) 00:07:34.148 5948.652 - 5973.858: 0.3631% ( 11) 00:07:34.148 5973.858 - 5999.065: 0.4603% ( 15) 00:07:34.148 5999.065 - 6024.271: 0.5770% ( 18) 00:07:34.148 6024.271 - 6049.477: 0.7197% ( 22) 00:07:34.148 6049.477 - 6074.683: 0.9595% ( 37) 00:07:34.148 6074.683 - 6099.889: 1.2837% ( 50) 00:07:34.148 6099.889 - 6125.095: 1.5171% ( 36) 00:07:34.148 6125.095 - 6150.302: 1.9515% ( 67) 00:07:34.148 6150.302 - 6175.508: 2.4507% ( 77) 00:07:34.148 6175.508 - 6200.714: 2.8981% ( 69) 00:07:34.148 6200.714 - 6225.920: 3.2093% ( 48) 00:07:34.148 6225.920 - 6251.126: 3.4816% ( 42) 00:07:34.148 6251.126 - 6276.332: 3.8965% ( 64) 00:07:34.148 6276.332 - 6301.538: 4.3179% ( 65) 00:07:34.148 6301.538 - 6326.745: 4.9663% ( 100) 00:07:34.148 6326.745 - 6351.951: 5.5368% 
( 88) 00:07:34.148 6351.951 - 6377.157: 6.1138% ( 89) 00:07:34.148 6377.157 - 6402.363: 6.9826% ( 134) 00:07:34.148 6402.363 - 6427.569: 8.0913% ( 171) 00:07:34.148 6427.569 - 6452.775: 9.1351% ( 161) 00:07:34.148 6452.775 - 6503.188: 11.7674% ( 406) 00:07:34.148 6503.188 - 6553.600: 16.4938% ( 729) 00:07:34.148 6553.600 - 6604.012: 22.5882% ( 940) 00:07:34.148 6604.012 - 6654.425: 29.8690% ( 1123) 00:07:34.148 6654.425 - 6704.837: 36.8970% ( 1084) 00:07:34.148 6704.837 - 6755.249: 43.0563% ( 950) 00:07:34.148 6755.249 - 6805.662: 46.9398% ( 599) 00:07:34.148 6805.662 - 6856.074: 50.4733% ( 545) 00:07:34.148 6856.074 - 6906.486: 53.0472% ( 397) 00:07:34.148 6906.486 - 6956.898: 55.0117% ( 303) 00:07:34.148 6956.898 - 7007.311: 56.1333% ( 173) 00:07:34.148 7007.311 - 7057.723: 57.2355% ( 170) 00:07:34.148 7057.723 - 7108.135: 58.3182% ( 167) 00:07:34.148 7108.135 - 7158.548: 59.2518% ( 144) 00:07:34.148 7158.548 - 7208.960: 60.0882% ( 129) 00:07:34.148 7208.960 - 7259.372: 61.0931% ( 155) 00:07:34.148 7259.372 - 7309.785: 62.0526% ( 148) 00:07:34.148 7309.785 - 7360.197: 62.8242% ( 119) 00:07:34.148 7360.197 - 7410.609: 63.4855% ( 102) 00:07:34.148 7410.609 - 7461.022: 64.4710% ( 152) 00:07:34.148 7461.022 - 7511.434: 65.8779% ( 217) 00:07:34.148 7511.434 - 7561.846: 66.5781% ( 108) 00:07:34.148 7561.846 - 7612.258: 67.4728% ( 138) 00:07:34.148 7612.258 - 7662.671: 68.5425% ( 165) 00:07:34.148 7662.671 - 7713.083: 69.4243% ( 136) 00:07:34.148 7713.083 - 7763.495: 70.0596% ( 98) 00:07:34.148 7763.495 - 7813.908: 71.0127% ( 147) 00:07:34.148 7813.908 - 7864.320: 71.5573% ( 84) 00:07:34.148 7864.320 - 7914.732: 72.0306% ( 73) 00:07:34.148 7914.732 - 7965.145: 72.5622% ( 82) 00:07:34.148 7965.145 - 8015.557: 73.0226% ( 71) 00:07:34.148 8015.557 - 8065.969: 73.4959% ( 73) 00:07:34.148 8065.969 - 8116.382: 74.0340% ( 83) 00:07:34.148 8116.382 - 8166.794: 74.7407% ( 109) 00:07:34.148 8166.794 - 8217.206: 75.2788% ( 83) 00:07:34.148 8217.206 - 8267.618: 75.8364% ( 86) 00:07:34.148 8267.618 - 8318.031: 76.3939% ( 86) 00:07:34.148 8318.031 - 8368.443: 76.8218% ( 66) 00:07:34.148 8368.443 - 8418.855: 77.5156% ( 107) 00:07:34.148 8418.855 - 8469.268: 77.9240% ( 63) 00:07:34.148 8469.268 - 8519.680: 78.2287% ( 47) 00:07:34.148 8519.680 - 8570.092: 78.6891% ( 71) 00:07:34.148 8570.092 - 8620.505: 79.0586% ( 57) 00:07:34.148 8620.505 - 8670.917: 79.2596% ( 31) 00:07:34.148 8670.917 - 8721.329: 79.3957% ( 21) 00:07:34.148 8721.329 - 8771.742: 79.5708% ( 27) 00:07:34.148 8771.742 - 8822.154: 79.7394% ( 26) 00:07:34.148 8822.154 - 8872.566: 79.8820% ( 22) 00:07:34.148 8872.566 - 8922.978: 79.9209% ( 6) 00:07:34.148 8922.978 - 8973.391: 79.9793% ( 9) 00:07:34.148 8973.391 - 9023.803: 80.0441% ( 10) 00:07:34.148 9023.803 - 9074.215: 80.0960% ( 8) 00:07:34.148 9074.215 - 9124.628: 80.1349% ( 6) 00:07:34.148 9124.628 - 9175.040: 80.1608% ( 4) 00:07:34.148 9175.040 - 9225.452: 80.2256% ( 10) 00:07:34.148 9225.452 - 9275.865: 80.3229% ( 15) 00:07:34.148 9275.865 - 9326.277: 80.3683% ( 7) 00:07:34.148 9326.277 - 9376.689: 80.4396% ( 11) 00:07:34.148 9376.689 - 9427.102: 80.5109% ( 11) 00:07:34.148 9427.102 - 9477.514: 80.5887% ( 12) 00:07:34.148 9477.514 - 9527.926: 80.6665% ( 12) 00:07:34.148 9527.926 - 9578.338: 80.7119% ( 7) 00:07:34.148 9578.338 - 9628.751: 80.7378% ( 4) 00:07:34.148 9628.751 - 9679.163: 80.7573% ( 3) 00:07:34.148 9679.163 - 9729.575: 80.8026% ( 7) 00:07:34.148 9729.575 - 9779.988: 80.8610% ( 9) 00:07:34.148 9779.988 - 9830.400: 80.9129% ( 8) 00:07:34.148 9830.400 - 9880.812: 81.0944% ( 28) 
00:07:34.148 9880.812 - 9931.225: 81.2111% ( 18) 00:07:34.148 9931.225 - 9981.637: 81.2565% ( 7) 00:07:34.148 9981.637 - 10032.049: 81.3473% ( 14) 00:07:34.148 10032.049 - 10082.462: 81.4380% ( 14) 00:07:34.148 10082.462 - 10132.874: 81.5547% ( 18) 00:07:34.148 10132.874 - 10183.286: 81.7103% ( 24) 00:07:34.148 10183.286 - 10233.698: 81.7622% ( 8) 00:07:34.148 10233.698 - 10284.111: 81.8335% ( 11) 00:07:34.148 10284.111 - 10334.523: 81.9113% ( 12) 00:07:34.148 10334.523 - 10384.935: 81.9956% ( 13) 00:07:34.148 10384.935 - 10435.348: 82.0410% ( 7) 00:07:34.148 10435.348 - 10485.760: 82.0539% ( 2) 00:07:34.148 10485.760 - 10536.172: 82.0669% ( 2) 00:07:34.148 10536.172 - 10586.585: 82.0799% ( 2) 00:07:34.148 10586.585 - 10636.997: 82.0928% ( 2) 00:07:34.148 10636.997 - 10687.409: 82.1058% ( 2) 00:07:34.148 10687.409 - 10737.822: 82.1253% ( 3) 00:07:34.148 10737.822 - 10788.234: 82.1577% ( 5) 00:07:34.148 10788.234 - 10838.646: 82.1836% ( 4) 00:07:34.148 10838.646 - 10889.058: 82.2095% ( 4) 00:07:34.148 10889.058 - 10939.471: 82.2355% ( 4) 00:07:34.148 10939.471 - 10989.883: 82.2549% ( 3) 00:07:34.148 10989.883 - 11040.295: 82.2679% ( 2) 00:07:34.148 11040.295 - 11090.708: 82.2873% ( 3) 00:07:34.148 11090.708 - 11141.120: 82.2938% ( 1) 00:07:34.148 11141.120 - 11191.532: 82.3133% ( 3) 00:07:34.148 11191.532 - 11241.945: 82.3522% ( 6) 00:07:34.148 11241.945 - 11292.357: 82.4170% ( 10) 00:07:34.148 11292.357 - 11342.769: 82.4883% ( 11) 00:07:34.148 11342.769 - 11393.182: 82.5402% ( 8) 00:07:34.148 11393.182 - 11443.594: 82.5596% ( 3) 00:07:34.148 11443.594 - 11494.006: 82.6115% ( 8) 00:07:34.148 11494.006 - 11544.418: 82.6439% ( 5) 00:07:34.148 11544.418 - 11594.831: 82.6828% ( 6) 00:07:34.148 11594.831 - 11645.243: 82.7541% ( 11) 00:07:34.148 11645.243 - 11695.655: 82.8255% ( 11) 00:07:34.148 11695.655 - 11746.068: 82.8644% ( 6) 00:07:34.148 11746.068 - 11796.480: 82.8773% ( 2) 00:07:34.148 11796.480 - 11846.892: 82.8838% ( 1) 00:07:34.148 11846.892 - 11897.305: 82.8968% ( 2) 00:07:34.148 11897.305 - 11947.717: 82.9098% ( 2) 00:07:34.148 11947.717 - 11998.129: 82.9162% ( 1) 00:07:34.148 11998.129 - 12048.542: 82.9357% ( 3) 00:07:34.148 12048.542 - 12098.954: 82.9487% ( 2) 00:07:34.148 12098.954 - 12149.366: 82.9616% ( 2) 00:07:34.148 12149.366 - 12199.778: 82.9746% ( 2) 00:07:34.148 12199.778 - 12250.191: 82.9876% ( 2) 00:07:34.148 12250.191 - 12300.603: 83.0005% ( 2) 00:07:34.148 12300.603 - 12351.015: 83.0200% ( 3) 00:07:34.148 12351.015 - 12401.428: 83.0265% ( 1) 00:07:34.148 12401.428 - 12451.840: 83.0459% ( 3) 00:07:34.148 12451.840 - 12502.252: 83.0524% ( 1) 00:07:34.148 12502.252 - 12552.665: 83.0848% ( 5) 00:07:34.148 12552.665 - 12603.077: 83.1107% ( 4) 00:07:34.148 12603.077 - 12653.489: 83.3247% ( 33) 00:07:34.148 12653.489 - 12703.902: 83.3441% ( 3) 00:07:34.148 12703.902 - 12754.314: 83.3571% ( 2) 00:07:34.148 12754.314 - 12804.726: 83.3701% ( 2) 00:07:34.148 12804.726 - 12855.138: 83.3766% ( 1) 00:07:34.148 12855.138 - 12905.551: 83.3960% ( 3) 00:07:34.148 12905.551 - 13006.375: 83.4479% ( 8) 00:07:34.148 13006.375 - 13107.200: 83.4997% ( 8) 00:07:34.148 13107.200 - 13208.025: 83.5775% ( 12) 00:07:34.148 13208.025 - 13308.849: 83.8174% ( 37) 00:07:34.148 13308.849 - 13409.674: 84.1481% ( 51) 00:07:34.148 13409.674 - 13510.498: 84.5371% ( 60) 00:07:34.148 13510.498 - 13611.323: 84.7575% ( 34) 00:07:34.148 13611.323 - 13712.148: 85.2697% ( 79) 00:07:34.148 13712.148 - 13812.972: 86.0088% ( 114) 00:07:34.148 13812.972 - 13913.797: 87.0137% ( 155) 00:07:34.148 13913.797 - 14014.622: 
87.9279% ( 141)
00:07:34.148 [tail of the preceding latency histogram elided; its cumulative IO count reaches 100.0000% in the final bucket, 26214.400 - 26416.049 us ( 3 )]
00:07:34.148 
00:07:34.149 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:34.149 ==============================================================================
00:07:34.149 Range in us Cumulative IO count
00:07:34.149 4940.406 - 4965.612: 0.0519% ( 8)
00:07:34.149 [per-bucket records elided; cumulative IO count crosses 50% in the 6805.662 - 6856.074 us bucket and reaches 100.0000% at 26012.751 - 26214.400 us ( 1 )]
00:07:34.150 
00:07:34.150 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:34.150 ==============================================================================
00:07:34.150 Range in us Cumulative IO count
00:07:34.150 4511.902 - 4537.108: 0.0195% ( 3)
00:07:34.151 [per-bucket records elided; cumulative IO count crosses 50% in the 6805.662 - 6856.074 us bucket and reaches 100.0000% at 26012.751 - 26214.400 us ( 2 )]
00:07:34.151 
00:07:34.151 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:34.151 ==============================================================================
00:07:34.151 Range in us Cumulative IO count
00:07:34.152 4007.778 - 4032.985: 0.0065% ( 1)
00:07:34.153 [per-bucket records elided; cumulative IO count crosses 50% in the 6805.662 - 6856.074 us bucket and reaches 100.0000% at 26416.049 - 26617.698 us ( 9 )]
00:07:34.153 
00:07:34.153 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:34.153 ==============================================================================
00:07:34.153 Range in us Cumulative IO count
00:07:34.153 3705.305 - 3730.511: 0.0259% ( 4)
00:07:34.154 [per-bucket records elided; cumulative IO count crosses 50% in the 6805.662 - 6856.074 us bucket and reaches 100.0000% at 26416.049 - 26617.698 us ( 2 )]
00:07:34.154 
00:07:34.154 23:20:20 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:34.154 
00:07:34.154 real 0m2.477s
00:07:34.154 user 0m2.168s
00:07:34.154 sys 0m0.199s
00:07:34.154 23:20:20 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:34.154 23:20:20 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:34.154 ************************************
00:07:34.154 END TEST nvme_perf
00:07:34.154 ************************************
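For readers of these reports: each histogram row is one latency bucket, the percentage is the cumulative share of all IO completed at or below that bucket, and the parenthesized number is the per-bucket IO count; empty buckets are skipped, which is why the ranges are not contiguous. Below is a self-contained C sketch of how such a cumulative histogram can be derived from raw per-IO latencies. The geometric bucketing is illustrative only, not the exact scheme SPDK's perf tool uses.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
	double x = *(const double *)a, y = *(const double *)b;
	return (x > y) - (x < y);
}

/*
 * Print "range_lo - range_hi: cumulative% ( count )" rows in the same
 * shape as the log above. Assumes all latencies are positive.
 */
static void print_cumulative(double *lat_us, size_t n, int nbuckets)
{
	qsort(lat_us, n, sizeof(double), cmp_double);
	double lo = lat_us[0], hi = lat_us[n - 1] * 1.0001;
	size_t i = 0, cum = 0;

	printf("       Range in us     Cumulative    IO count\n");
	for (int b = 0; b < nbuckets; b++) {
		double b_lo = lo * pow(hi / lo, (double)b / nbuckets);
		double b_hi = lo * pow(hi / lo, (double)(b + 1) / nbuckets);
		size_t count = 0;

		while (i < n && lat_us[i] < b_hi) {
			i++;
			count++;
		}
		cum += count;
		if (count > 0)	/* empty buckets are skipped, as in the log */
			printf("%9.3f - %9.3f: %8.4f%% (%5zu)\n",
			       b_lo, b_hi, 100.0 * (double)cum / n, count);
	}
}

int main(void)
{
	double lat[] = { 11.2, 11.4, 11.5, 11.9, 12.3, 14.0, 95.7 };

	print_cumulative(lat, sizeof(lat) / sizeof(lat[0]), 16);	/* cc histo.c -lm */
	return 0;
}

Read the output the same way as the histograms above: the row where the cumulative column crosses 50% gives the median latency.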
00:07:34.154 23:20:20 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:34.154 23:20:20 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:34.154 23:20:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:34.154 23:20:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:34.154 ************************************
00:07:34.154 START TEST nvme_hello_world
00:07:34.154 ************************************
00:07:34.154 23:20:20 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:34.412 Initializing NVMe Controllers
00:07:34.412 Attached to 0000:00:10.0
00:07:34.412 Namespace ID: 1 size: 6GB
00:07:34.412 Attached to 0000:00:11.0
00:07:34.412 Namespace ID: 1 size: 5GB
00:07:34.412 Attached to 0000:00:13.0
00:07:34.412 Namespace ID: 1 size: 1GB
00:07:34.412 Attached to 0000:00:12.0
00:07:34.412 Namespace ID: 1 size: 4GB
00:07:34.412 Namespace ID: 2 size: 4GB
00:07:34.412 Namespace ID: 3 size: 4GB
00:07:34.412 Initialization complete.
00:07:34.412 INFO: using host memory buffer for IO
00:07:34.412 Hello world!
00:07:34.412 INFO: using host memory buffer for IO
00:07:34.412 Hello world!
00:07:34.412 INFO: using host memory buffer for IO
00:07:34.412 Hello world!
00:07:34.412 INFO: using host memory buffer for IO
00:07:34.412 Hello world!
00:07:34.412 INFO: using host memory buffer for IO
00:07:34.412 Hello world!
00:07:34.412 INFO: using host memory buffer for IO
00:07:34.412 Hello world!
00:07:34.412 
00:07:34.412 real 0m0.193s
00:07:34.412 user 0m0.059s
00:07:34.412 sys 0m0.093s
00:07:34.412 23:20:20 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:34.412 23:20:20 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:34.412 ************************************
00:07:34.412 END TEST nvme_hello_world
00:07:34.412 ************************************
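The hello_world example above writes a "Hello world!" buffer to each attached namespace and reads it back through an IO queue pair, printing one greeting per namespace. A condensed sketch of the probe/attach/write flow it follows, assuming the public SPDK NVMe driver API (error handling, the read-back half, and multi-controller bookkeeping are omitted; the full version lives under examples/nvme/hello_world in the SPDK tree):

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;
static struct spdk_nvme_ns *g_ns;
static bool g_done;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* attach to every controller found on the local bus */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	/* keeps only the last controller/namespace, for brevity */
	g_ctrlr = ctrlr;
	g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
}

static void
write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_world_sketch";
	if (spdk_env_init(&opts) < 0)
		return 1;
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || g_ns == NULL)
		return 1;

	struct spdk_nvme_qpair *qpair =
		spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
	char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

	snprintf(buf, 0x1000, "%s", "Hello world!\n");
	/* write one logical block at LBA 0, then poll for its completion */
	spdk_nvme_ns_cmd_write(g_ns, qpair, buf, 0, 1, write_complete, NULL, 0);
	while (!g_done)
		spdk_nvme_qpair_process_completions(qpair, 0);

	spdk_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(g_ctrlr);
	return 0;
}

The -i 0 argument in the invocation above selects a shared-memory ID (opts.shm_id) so multiple SPDK processes can coexist; the sketch leaves it at the default.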
00:07:34.412 23:20:20 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:34.412 23:20:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:34.412 23:20:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:34.412 23:20:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:34.412 ************************************
00:07:34.412 START TEST nvme_sgl
00:07:34.412 ************************************
00:07:34.412 23:20:20 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:34.670 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:34.670 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:34.670 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:34.670 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:34.670 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:34.670 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:34.670 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:34.670 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:34.670 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:34.670 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:34.670 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:34.670 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:34.670 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:34.670 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:34.670 NVMe Readv/Writev Request test
00:07:34.670 Attached to 0000:00:10.0
00:07:34.670 Attached to 0000:00:11.0
00:07:34.670 Attached to 0000:00:13.0
00:07:34.670 Attached to 0000:00:12.0
00:07:34.670 0000:00:10.0: build_io_request_2 test passed
00:07:34.670 0000:00:10.0: build_io_request_4 test passed
00:07:34.670 0000:00:10.0: build_io_request_5 test passed
00:07:34.670 0000:00:10.0: build_io_request_6 test passed
00:07:34.670 0000:00:10.0: build_io_request_7 test passed
00:07:34.670 0000:00:10.0: build_io_request_10 test passed
00:07:34.670 0000:00:11.0: build_io_request_2 test passed
00:07:34.670 0000:00:11.0: build_io_request_4 test passed
00:07:34.670 0000:00:11.0: build_io_request_5 test passed
00:07:34.670 0000:00:11.0: build_io_request_6 test passed
00:07:34.670 0000:00:11.0: build_io_request_7 test passed
00:07:34.670 0000:00:11.0: build_io_request_10 test passed
00:07:34.670 Cleaning up...
00:07:34.670 
00:07:34.670 real 0m0.259s
00:07:34.670 user 0m0.125s
00:07:34.670 sys 0m0.088s
00:07:34.670 23:20:20 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:34.670 23:20:20 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:34.670 ************************************
00:07:34.670 END TEST nvme_sgl
00:07:34.670 ************************************
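The sgl test drives the scatter-gather entry points: it describes a single IO as several discontiguous buffer segments and expects the driver to reject layouts whose total length is not a whole number of logical blocks, which is exactly the "Invalid IO length parameter" result logged for the negative cases. A sketch of the callback-based SGL submission path, assuming the spdk_nvme_ns_cmd_writev API and the setup scaffolding from the hello_world sketch above:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* One IO described as two discontiguous segments; the driver walks the
 * list through the two callbacks below. Buffers must come from
 * spdk_zmalloc() or a similar DMA-safe allocator. */
struct sgl_ctx {
	struct {
		void *base;
		uint32_t len;
	} sge[2];
	int idx;
};

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
	struct sgl_ctx *ctx = cb_arg;

	ctx->idx = 0; /* offset handling omitted for brevity */
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
	struct sgl_ctx *ctx = cb_arg;

	*address = ctx->sge[ctx->idx].base;
	*length = ctx->sge[ctx->idx].len;
	ctx->idx++;
	return 0;
}

/*
 * If the SGE lengths do not sum to lba_count * block_size, the request
 * is rejected before it ever reaches the device -- the "Invalid IO
 * length parameter" outcome the negative cases above check for.
 */
static int
submit_two_sge_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		     struct sgl_ctx *ctx, spdk_nvme_cmd_cb cb_fn)
{
	return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* lba */, 8 /* blocks */,
				       cb_fn, ctx, 0, reset_sgl, next_sge);
}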
00:07:34.670 23:20:20 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:34.670 23:20:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:34.670 23:20:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:34.670 23:20:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:34.670 ************************************
00:07:34.670 START TEST nvme_e2edp
00:07:34.670 ************************************
00:07:34.670 23:20:20 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:34.929 NVMe Write/Read with End-to-End data protection test
00:07:34.929 Attached to 0000:00:10.0
00:07:34.929 Attached to 0000:00:11.0
00:07:34.929 Attached to 0000:00:13.0
00:07:34.929 Attached to 0000:00:12.0
00:07:34.929 Cleaning up...
00:07:34.929 
00:07:34.929 real 0m0.192s
00:07:34.929 user 0m0.065s
00:07:34.929 sys 0m0.081s
00:07:34.929 23:20:20 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:34.929 23:20:20 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:34.929 ************************************
00:07:34.929 END TEST nvme_e2edp
00:07:34.929 ************************************
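End-to-end data protection means each protected logical block carries an 8-byte protection information tuple that both host and controller can verify; the quick "Cleaning up..." above suggests none of these emulated namespaces is formatted with protection enabled. An illustrative, self-contained C example of the Type 1 tuple and its CRC-16 guard computation (T10-DIF polynomial 0x8BB7; on media the fields are big-endian, byte-swapping is omitted here):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* NVMe/T10 protection information tuple appended to each protected
 * block (Type 1 shown). */
struct t10_pi_tuple {
	uint16_t guard;    /* CRC-16 of the block data */
	uint16_t app_tag;  /* opaque to the controller */
	uint32_t ref_tag;  /* Type 1: low 32 bits of the LBA */
};

/* CRC16-T10DIF, polynomial 0x8BB7, bit-by-bit for clarity. */
static uint16_t
crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int b = 0; b < 8; b++)
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
	}
	return crc;
}

int main(void)
{
	uint8_t block[512];

	memset(block, 0xAB, sizeof(block));
	struct t10_pi_tuple pi = {
		.guard = crc16_t10dif(block, sizeof(block)),
		.app_tag = 0,
		.ref_tag = 42, /* the LBA being written */
	};
	printf("guard=0x%04x ref_tag=%u\n", pi.guard, pi.ref_tag);
	return 0;
}

With Type 1 protection, a controller rejects a write whose ref_tag does not match the target LBA, which is what gives the "end-to-end" guarantee the test name refers to.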
00:07:34.929 23:20:21 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:34.929 23:20:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:34.929 23:20:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:34.929 23:20:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:34.929 ************************************
00:07:34.929 START TEST nvme_reserve
00:07:34.929 ************************************
00:07:34.929 23:20:21 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:35.187 =====================================================
00:07:35.187 NVMe Controller at PCI bus 0, device 16, function 0
00:07:35.187 =====================================================
00:07:35.187 Reservations:                Not Supported
00:07:35.187 =====================================================
00:07:35.187 NVMe Controller at PCI bus 0, device 17, function 0
00:07:35.187 =====================================================
00:07:35.187 Reservations:                Not Supported
00:07:35.187 =====================================================
00:07:35.187 NVMe Controller at PCI bus 0, device 19, function 0
00:07:35.187 =====================================================
00:07:35.187 Reservations:                Not Supported
00:07:35.187 =====================================================
00:07:35.187 NVMe Controller at PCI bus 0, device 18, function 0
00:07:35.187 =====================================================
00:07:35.187 Reservations:                Not Supported
00:07:35.187 Reservation test passed
00:07:35.187 
00:07:35.187 real 0m0.187s
00:07:35.187 user 0m0.071s
00:07:35.187 sys 0m0.071s
00:07:35.187 23:20:21 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:35.187 23:20:21 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:35.187 ************************************
00:07:35.187 END TEST nvme_reserve
00:07:35.187 ************************************
00:07:35.187 23:20:21 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:35.187 23:20:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:35.187 23:20:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:35.187 23:20:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:35.187 ************************************
00:07:35.187 START TEST nvme_err_injection
00:07:35.187 ************************************
00:07:35.187 23:20:21 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:35.445 NVMe Error Injection test
00:07:35.445 Attached to 0000:00:10.0
00:07:35.445 Attached to 0000:00:11.0
00:07:35.445 Attached to 0000:00:13.0
00:07:35.445 Attached to 0000:00:12.0
00:07:35.445 0000:00:10.0: get features failed as expected
00:07:35.445 0000:00:11.0: get features failed as expected
00:07:35.445 0000:00:13.0: get features failed as expected
00:07:35.445 0000:00:12.0: get features failed as expected
00:07:35.445 0000:00:10.0: get features successfully as expected
00:07:35.445 0000:00:11.0: get features successfully as expected
00:07:35.445 0000:00:13.0: get features successfully as expected
00:07:35.445 0000:00:12.0: get features successfully as expected
00:07:35.445 0000:00:11.0: read failed as expected
00:07:35.445 0000:00:10.0: read failed as expected
00:07:35.445 0000:00:13.0: read failed as expected
00:07:35.445 0000:00:12.0: read failed as expected
00:07:35.445 0000:00:11.0: read successfully as expected
00:07:35.445 0000:00:13.0: read successfully as expected
00:07:35.445 0000:00:12.0: read successfully as expected
00:07:35.445 0000:00:10.0: read successfully as expected
00:07:35.445 Cleaning up...
00:07:35.445 
00:07:35.445 real 0m0.206s
00:07:35.445 user 0m0.074s
00:07:35.445 sys 0m0.085s
00:07:35.445 23:20:21 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:35.445 23:20:21 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:35.445 ************************************
00:07:35.445 END TEST nvme_err_injection
00:07:35.445 ************************************
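The err_injection test arms SPDK's software error injection so that the next GET FEATURES and READ commands complete with an error status ("failed as expected"); once the injection count is exhausted, the same commands succeed ("successfully as expected"). A sketch of arming that injection; the helper's signature here is taken from include/spdk/nvme.h in recent SPDK releases and should be checked against your tree:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Make the next GET FEATURES admin command and the next READ I/O fail
 * with a generic status, mirroring what the err_injection test does. */
static int
arm_error_injection(struct spdk_nvme_ctrlr *ctrlr,
		    struct spdk_nvme_qpair *io_qpair)
{
	int rc;

	/* Admin commands are injected on the admin queue: pass NULL qpair. */
	rc = spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
		SPDK_NVME_OPC_GET_FEATURES, true /* do_not_submit */,
		0 /* timeout_in_us */, 1 /* err_count */,
		SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
	if (rc != 0)
		return rc;

	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, io_qpair,
		SPDK_NVME_OPC_READ, true, 0, 1,
		SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
}

/* After the armed command has failed err_count times, the injection is
 * spent and subsequent commands succeed -- exactly the failed/succeeded
 * sequence in the log above. An injection can also be disarmed early
 * with spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, qpair, opc). */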
00:07:36.815 submit (in ns) avg, min, max = 11495.1, 10296.9, 235823.8 00:07:36.815 complete (in ns) avg, min, max = 7622.5, 7225.4, 478337.7 00:07:36.815 00:07:36.815 Submit histogram 00:07:36.815 ================ 00:07:36.815 Range in us Cumulative Count 00:07:36.815 10.289 - 10.338: 0.0056% ( 1) 00:07:36.815 10.388 - 10.437: 0.0169% ( 2) 00:07:36.815 10.880 - 10.929: 0.0394% ( 4) 00:07:36.815 10.929 - 10.978: 0.2812% ( 43) 00:07:36.815 10.978 - 11.028: 2.4069% ( 378) 00:07:36.815 11.028 - 11.077: 9.6671% ( 1291) 00:07:36.815 11.077 - 11.126: 24.5079% ( 2639) 00:07:36.815 11.126 - 11.175: 43.7802% ( 3427) 00:07:36.815 11.175 - 11.225: 59.8583% ( 2859) 00:07:36.815 11.225 - 11.274: 70.3014% ( 1857) 00:07:36.815 11.274 - 11.323: 76.2569% ( 1059) 00:07:36.815 11.323 - 11.372: 79.9910% ( 664) 00:07:36.815 11.372 - 11.422: 82.2405% ( 400) 00:07:36.815 11.422 - 11.471: 83.7082% ( 261) 00:07:36.815 11.471 - 11.520: 84.6586% ( 169) 00:07:36.815 11.520 - 11.569: 85.3672% ( 126) 00:07:36.815 11.569 - 11.618: 85.9802% ( 109) 00:07:36.815 11.618 - 11.668: 86.5088% ( 94) 00:07:36.815 11.668 - 11.717: 86.9194% ( 73) 00:07:36.815 11.717 - 11.766: 87.3130% ( 70) 00:07:36.815 11.766 - 11.815: 87.7179% ( 72) 00:07:36.816 11.815 - 11.865: 88.2634% ( 97) 00:07:36.816 11.865 - 11.914: 88.9270% ( 118) 00:07:36.816 11.914 - 11.963: 89.7762% ( 151) 00:07:36.816 11.963 - 12.012: 90.8728% ( 195) 00:07:36.816 12.012 - 12.062: 91.8344% ( 171) 00:07:36.816 12.062 - 12.111: 92.7455% ( 162) 00:07:36.816 12.111 - 12.160: 93.4934% ( 133) 00:07:36.816 12.160 - 12.209: 94.2920% ( 142) 00:07:36.816 12.209 - 12.258: 94.9893% ( 124) 00:07:36.816 12.258 - 12.308: 95.3886% ( 71) 00:07:36.816 12.308 - 12.357: 95.6360% ( 44) 00:07:36.816 12.357 - 12.406: 95.9228% ( 51) 00:07:36.816 12.406 - 12.455: 96.0859% ( 29) 00:07:36.816 12.455 - 12.505: 96.1984% ( 20) 00:07:36.816 12.505 - 12.554: 96.2659% ( 12) 00:07:36.816 12.554 - 12.603: 96.3896% ( 22) 00:07:36.816 12.603 - 12.702: 96.4290% ( 7) 00:07:36.816 12.702 - 12.800: 96.4458% ( 3) 00:07:36.816 12.800 - 12.898: 96.4796% ( 6) 00:07:36.816 12.898 - 12.997: 96.5414% ( 11) 00:07:36.816 12.997 - 13.095: 96.6820% ( 25) 00:07:36.816 13.095 - 13.194: 96.8564% ( 31) 00:07:36.816 13.194 - 13.292: 97.0082% ( 27) 00:07:36.816 13.292 - 13.391: 97.1544% ( 26) 00:07:36.816 13.391 - 13.489: 97.2950% ( 25) 00:07:36.816 13.489 - 13.588: 97.4187% ( 22) 00:07:36.816 13.588 - 13.686: 97.5087% ( 16) 00:07:36.816 13.686 - 13.785: 97.5762% ( 12) 00:07:36.816 13.785 - 13.883: 97.6381% ( 11) 00:07:36.816 13.883 - 13.982: 97.6887% ( 9) 00:07:36.816 13.982 - 14.080: 97.7280% ( 7) 00:07:36.816 14.080 - 14.178: 97.7505% ( 4) 00:07:36.816 14.178 - 14.277: 97.7674% ( 3) 00:07:36.816 14.277 - 14.375: 97.8068% ( 7) 00:07:36.816 14.375 - 14.474: 97.8518% ( 8) 00:07:36.816 14.474 - 14.572: 97.8967% ( 8) 00:07:36.816 14.572 - 14.671: 97.9080% ( 2) 00:07:36.816 14.671 - 14.769: 97.9361% ( 5) 00:07:36.816 14.769 - 14.868: 97.9642% ( 5) 00:07:36.816 14.868 - 14.966: 97.9980% ( 6) 00:07:36.816 14.966 - 15.065: 98.0373% ( 7) 00:07:36.816 15.065 - 15.163: 98.0655% ( 5) 00:07:36.816 15.163 - 15.262: 98.1048% ( 7) 00:07:36.816 15.262 - 15.360: 98.1442% ( 7) 00:07:36.816 15.360 - 15.458: 98.1723% ( 5) 00:07:36.816 15.458 - 15.557: 98.2117% ( 7) 00:07:36.816 15.557 - 15.655: 98.2623% ( 9) 00:07:36.816 15.655 - 15.754: 98.2792% ( 3) 00:07:36.816 15.754 - 15.852: 98.2904% ( 2) 00:07:36.816 15.852 - 15.951: 98.2960% ( 1) 00:07:36.816 15.951 - 16.049: 98.3129% ( 3) 00:07:36.816 16.049 - 16.148: 98.3241% ( 2) 00:07:36.816 16.148 
- 16.246: 98.3410% ( 3) 00:07:36.816 16.246 - 16.345: 98.3579% ( 3) 00:07:36.816 16.443 - 16.542: 98.3691% ( 2) 00:07:36.816 16.542 - 16.640: 98.4029% ( 6) 00:07:36.816 16.640 - 16.738: 98.4254% ( 4) 00:07:36.816 16.738 - 16.837: 98.4929% ( 12) 00:07:36.816 16.837 - 16.935: 98.5885% ( 17) 00:07:36.816 16.935 - 17.034: 98.6953% ( 19) 00:07:36.816 17.034 - 17.132: 98.7684% ( 13) 00:07:36.816 17.132 - 17.231: 98.8359% ( 12) 00:07:36.816 17.231 - 17.329: 98.9034% ( 12) 00:07:36.816 17.329 - 17.428: 98.9934% ( 16) 00:07:36.816 17.428 - 17.526: 99.0665% ( 13) 00:07:36.816 17.526 - 17.625: 99.1396% ( 13) 00:07:36.816 17.625 - 17.723: 99.2183% ( 14) 00:07:36.816 17.723 - 17.822: 99.2464% ( 5) 00:07:36.816 17.822 - 17.920: 99.2970% ( 9) 00:07:36.816 17.920 - 18.018: 99.3308% ( 6) 00:07:36.816 18.018 - 18.117: 99.3645% ( 6) 00:07:36.816 18.117 - 18.215: 99.4151% ( 9) 00:07:36.816 18.215 - 18.314: 99.4320% ( 3) 00:07:36.816 18.314 - 18.412: 99.4714% ( 7) 00:07:36.816 18.412 - 18.511: 99.4995% ( 5) 00:07:36.816 18.511 - 18.609: 99.5332% ( 6) 00:07:36.816 18.609 - 18.708: 99.5838% ( 9) 00:07:36.816 18.708 - 18.806: 99.5951% ( 2) 00:07:36.816 18.806 - 18.905: 99.6063% ( 2) 00:07:36.816 18.905 - 19.003: 99.6176% ( 2) 00:07:36.816 19.200 - 19.298: 99.6288% ( 2) 00:07:36.816 19.298 - 19.397: 99.6457% ( 3) 00:07:36.816 19.397 - 19.495: 99.6570% ( 2) 00:07:36.816 19.594 - 19.692: 99.6682% ( 2) 00:07:36.816 19.791 - 19.889: 99.6795% ( 2) 00:07:36.816 19.889 - 19.988: 99.6907% ( 2) 00:07:36.816 19.988 - 20.086: 99.7019% ( 2) 00:07:36.816 20.086 - 20.185: 99.7076% ( 1) 00:07:36.816 20.185 - 20.283: 99.7132% ( 1) 00:07:36.816 20.283 - 20.382: 99.7188% ( 1) 00:07:36.816 20.480 - 20.578: 99.7244% ( 1) 00:07:36.816 20.677 - 20.775: 99.7301% ( 1) 00:07:36.816 20.972 - 21.071: 99.7469% ( 3) 00:07:36.816 21.268 - 21.366: 99.7526% ( 1) 00:07:36.816 21.366 - 21.465: 99.7582% ( 1) 00:07:36.816 21.563 - 21.662: 99.7638% ( 1) 00:07:36.816 21.760 - 21.858: 99.7694% ( 1) 00:07:36.816 21.858 - 21.957: 99.7751% ( 1) 00:07:36.816 22.252 - 22.351: 99.7807% ( 1) 00:07:36.816 22.449 - 22.548: 99.7863% ( 1) 00:07:36.816 22.548 - 22.646: 99.7919% ( 1) 00:07:36.816 22.646 - 22.745: 99.8088% ( 3) 00:07:36.816 22.745 - 22.843: 99.8144% ( 1) 00:07:36.816 22.843 - 22.942: 99.8200% ( 1) 00:07:36.816 23.138 - 23.237: 99.8257% ( 1) 00:07:36.816 23.532 - 23.631: 99.8313% ( 1) 00:07:36.816 23.729 - 23.828: 99.8369% ( 1) 00:07:36.816 23.828 - 23.926: 99.8425% ( 1) 00:07:36.816 24.123 - 24.222: 99.8482% ( 1) 00:07:36.816 24.320 - 24.418: 99.8538% ( 1) 00:07:36.816 24.911 - 25.009: 99.8594% ( 1) 00:07:36.816 25.600 - 25.797: 99.8650% ( 1) 00:07:36.816 26.388 - 26.585: 99.8707% ( 1) 00:07:36.816 27.963 - 28.160: 99.8763% ( 1) 00:07:36.816 32.492 - 32.689: 99.8819% ( 1) 00:07:36.816 34.855 - 35.052: 99.8875% ( 1) 00:07:36.816 35.446 - 35.643: 99.8932% ( 1) 00:07:36.816 36.037 - 36.234: 99.8988% ( 1) 00:07:36.816 38.006 - 38.203: 99.9044% ( 1) 00:07:36.816 41.157 - 41.354: 99.9100% ( 1) 00:07:36.816 41.945 - 42.142: 99.9156% ( 1) 00:07:36.816 42.338 - 42.535: 99.9213% ( 1) 00:07:36.816 43.126 - 43.323: 99.9269% ( 1) 00:07:36.816 43.717 - 43.914: 99.9325% ( 1) 00:07:36.816 44.702 - 44.898: 99.9381% ( 1) 00:07:36.816 45.095 - 45.292: 99.9438% ( 1) 00:07:36.816 46.080 - 46.277: 99.9494% ( 1) 00:07:36.816 46.671 - 46.868: 99.9550% ( 1) 00:07:36.816 49.034 - 49.231: 99.9606% ( 1) 00:07:36.816 55.138 - 55.532: 99.9663% ( 1) 00:07:36.816 60.258 - 60.652: 99.9719% ( 1) 00:07:36.816 61.834 - 62.228: 99.9831% ( 2) 00:07:36.816 76.406 - 76.800: 99.9888% ( 1) 
00:07:36.816 77.588 - 77.982: 99.9944% ( 1) 00:07:36.816 234.732 - 236.308: 100.0000% ( 1) 00:07:36.816 00:07:36.816 Complete histogram 00:07:36.816 ================== 00:07:36.816 Range in us Cumulative Count 00:07:36.816 7.188 - 7.237: 0.0169% ( 3) 00:07:36.816 7.237 - 7.286: 0.5511% ( 95) 00:07:36.816 7.286 - 7.335: 6.3547% ( 1032) 00:07:36.816 7.335 - 7.385: 24.2155% ( 3176) 00:07:36.816 7.385 - 7.434: 48.5885% ( 4334) 00:07:36.816 7.434 - 7.483: 68.5075% ( 3542) 00:07:36.816 7.483 - 7.532: 80.1203% ( 2065) 00:07:36.816 7.532 - 7.582: 86.6888% ( 1168) 00:07:36.816 7.582 - 7.631: 90.1642% ( 618) 00:07:36.816 7.631 - 7.680: 91.9019% ( 309) 00:07:36.816 7.680 - 7.729: 92.8973% ( 177) 00:07:36.816 7.729 - 7.778: 93.4316% ( 95) 00:07:36.816 7.778 - 7.828: 93.7577% ( 58) 00:07:36.816 7.828 - 7.877: 93.9264% ( 30) 00:07:36.816 7.877 - 7.926: 94.0445% ( 21) 00:07:36.816 7.926 - 7.975: 94.3763% ( 59) 00:07:36.816 7.975 - 8.025: 94.7756% ( 71) 00:07:36.816 8.025 - 8.074: 95.1355% ( 64) 00:07:36.816 8.074 - 8.123: 95.5573% ( 75) 00:07:36.816 8.123 - 8.172: 96.2209% ( 118) 00:07:36.816 8.172 - 8.222: 96.7945% ( 102) 00:07:36.816 8.222 - 8.271: 97.2500% ( 81) 00:07:36.816 8.271 - 8.320: 97.5593% ( 55) 00:07:36.816 8.320 - 8.369: 97.7449% ( 33) 00:07:36.816 8.369 - 8.418: 97.8799% ( 24) 00:07:36.816 8.418 - 8.468: 97.9417% ( 11) 00:07:36.816 8.468 - 8.517: 98.0092% ( 12) 00:07:36.816 8.517 - 8.566: 98.0542% ( 8) 00:07:36.816 8.566 - 8.615: 98.1104% ( 10) 00:07:36.816 8.615 - 8.665: 98.1329% ( 4) 00:07:36.816 8.665 - 8.714: 98.1442% ( 2) 00:07:36.816 8.714 - 8.763: 98.1498% ( 1) 00:07:36.816 8.763 - 8.812: 98.1611% ( 2) 00:07:36.816 8.812 - 8.862: 98.1723% ( 2) 00:07:36.816 8.862 - 8.911: 98.1892% ( 3) 00:07:36.816 8.911 - 8.960: 98.2004% ( 2) 00:07:36.816 8.960 - 9.009: 98.2117% ( 2) 00:07:36.816 9.009 - 9.058: 98.2173% ( 1) 00:07:36.816 9.108 - 9.157: 98.2229% ( 1) 00:07:36.816 9.305 - 9.354: 98.2285% ( 1) 00:07:36.816 9.502 - 9.551: 98.2398% ( 2) 00:07:36.816 9.551 - 9.600: 98.2454% ( 1) 00:07:36.816 9.698 - 9.748: 98.2623% ( 3) 00:07:36.816 9.748 - 9.797: 98.2679% ( 1) 00:07:36.816 9.797 - 9.846: 98.2735% ( 1) 00:07:36.817 9.846 - 9.895: 98.2848% ( 2) 00:07:36.817 9.945 - 9.994: 98.2960% ( 2) 00:07:36.817 9.994 - 10.043: 98.3129% ( 3) 00:07:36.817 10.043 - 10.092: 98.3354% ( 4) 00:07:36.817 10.092 - 10.142: 98.3410% ( 1) 00:07:36.817 10.142 - 10.191: 98.3635% ( 4) 00:07:36.817 10.191 - 10.240: 98.3748% ( 2) 00:07:36.817 10.240 - 10.289: 98.3916% ( 3) 00:07:36.817 10.289 - 10.338: 98.3973% ( 1) 00:07:36.817 10.338 - 10.388: 98.4085% ( 2) 00:07:36.817 10.388 - 10.437: 98.4198% ( 2) 00:07:36.817 10.437 - 10.486: 98.4254% ( 1) 00:07:36.817 10.486 - 10.535: 98.4422% ( 3) 00:07:36.817 10.535 - 10.585: 98.4479% ( 1) 00:07:36.817 10.585 - 10.634: 98.4591% ( 2) 00:07:36.817 10.634 - 10.683: 98.4704% ( 2) 00:07:36.817 10.732 - 10.782: 98.4760% ( 1) 00:07:36.817 10.782 - 10.831: 98.4872% ( 2) 00:07:36.817 10.831 - 10.880: 98.4929% ( 1) 00:07:36.817 10.880 - 10.929: 98.5041% ( 2) 00:07:36.817 10.978 - 11.028: 98.5097% ( 1) 00:07:36.817 11.126 - 11.175: 98.5154% ( 1) 00:07:36.817 11.175 - 11.225: 98.5266% ( 2) 00:07:36.817 11.225 - 11.274: 98.5322% ( 1) 00:07:36.817 11.422 - 11.471: 98.5378% ( 1) 00:07:36.817 11.569 - 11.618: 98.5435% ( 1) 00:07:36.817 11.618 - 11.668: 98.5491% ( 1) 00:07:36.817 12.012 - 12.062: 98.5547% ( 1) 00:07:36.817 12.505 - 12.554: 98.5660% ( 2) 00:07:36.817 12.603 - 12.702: 98.5716% ( 1) 00:07:36.817 12.702 - 12.800: 98.5772% ( 1) 00:07:36.817 12.898 - 12.997: 98.6166% ( 7) 
00:07:36.817 12.997 - 13.095: 98.6672% ( 9) 00:07:36.817 13.095 - 13.194: 98.7572% ( 16) 00:07:36.817 13.194 - 13.292: 98.7909% ( 6) 00:07:36.817 13.292 - 13.391: 98.8359% ( 8) 00:07:36.817 13.391 - 13.489: 98.8865% ( 9) 00:07:36.817 13.489 - 13.588: 98.9934% ( 19) 00:07:36.817 13.588 - 13.686: 99.0777% ( 15) 00:07:36.817 13.686 - 13.785: 99.1452% ( 12) 00:07:36.817 13.785 - 13.883: 99.2127% ( 12) 00:07:36.817 13.883 - 13.982: 99.2633% ( 9) 00:07:36.817 13.982 - 14.080: 99.3195% ( 10) 00:07:36.817 14.080 - 14.178: 99.3814% ( 11) 00:07:36.817 14.178 - 14.277: 99.4601% ( 14) 00:07:36.817 14.277 - 14.375: 99.5332% ( 13) 00:07:36.817 14.375 - 14.474: 99.5614% ( 5) 00:07:36.817 14.474 - 14.572: 99.6063% ( 8) 00:07:36.817 14.572 - 14.671: 99.6401% ( 6) 00:07:36.817 14.671 - 14.769: 99.6570% ( 3) 00:07:36.817 14.769 - 14.868: 99.6795% ( 4) 00:07:36.817 14.868 - 14.966: 99.7076% ( 5) 00:07:36.817 14.966 - 15.065: 99.7244% ( 3) 00:07:36.817 15.065 - 15.163: 99.7301% ( 1) 00:07:36.817 15.163 - 15.262: 99.7357% ( 1) 00:07:36.817 15.262 - 15.360: 99.7469% ( 2) 00:07:36.817 15.458 - 15.557: 99.7582% ( 2) 00:07:36.817 15.557 - 15.655: 99.7638% ( 1) 00:07:36.817 15.754 - 15.852: 99.7694% ( 1) 00:07:36.817 15.852 - 15.951: 99.7807% ( 2) 00:07:36.817 15.951 - 16.049: 99.7863% ( 1) 00:07:36.817 16.049 - 16.148: 99.7975% ( 2) 00:07:36.817 16.148 - 16.246: 99.8144% ( 3) 00:07:36.817 16.246 - 16.345: 99.8257% ( 2) 00:07:36.817 16.345 - 16.443: 99.8313% ( 1) 00:07:36.817 16.738 - 16.837: 99.8425% ( 2) 00:07:36.817 16.935 - 17.034: 99.8482% ( 1) 00:07:36.817 17.132 - 17.231: 99.8538% ( 1) 00:07:36.817 17.231 - 17.329: 99.8594% ( 1) 00:07:36.817 17.526 - 17.625: 99.8650% ( 1) 00:07:36.817 17.723 - 17.822: 99.8707% ( 1) 00:07:36.817 17.822 - 17.920: 99.8763% ( 1) 00:07:36.817 18.117 - 18.215: 99.8875% ( 2) 00:07:36.817 18.215 - 18.314: 99.8932% ( 1) 00:07:36.817 19.003 - 19.102: 99.8988% ( 1) 00:07:36.817 19.102 - 19.200: 99.9044% ( 1) 00:07:36.817 19.397 - 19.495: 99.9100% ( 1) 00:07:36.817 19.692 - 19.791: 99.9156% ( 1) 00:07:36.817 19.791 - 19.889: 99.9213% ( 1) 00:07:36.817 20.677 - 20.775: 99.9269% ( 1) 00:07:36.817 20.972 - 21.071: 99.9325% ( 1) 00:07:36.817 21.957 - 22.055: 99.9381% ( 1) 00:07:36.817 22.055 - 22.154: 99.9438% ( 1) 00:07:36.817 22.548 - 22.646: 99.9494% ( 1) 00:07:36.817 23.729 - 23.828: 99.9550% ( 1) 00:07:36.817 23.828 - 23.926: 99.9606% ( 1) 00:07:36.817 26.388 - 26.585: 99.9663% ( 1) 00:07:36.817 26.585 - 26.782: 99.9719% ( 1) 00:07:36.817 27.766 - 27.963: 99.9775% ( 1) 00:07:36.817 37.218 - 37.415: 99.9831% ( 1) 00:07:36.817 51.988 - 52.382: 99.9888% ( 1) 00:07:36.817 71.680 - 72.074: 99.9944% ( 1) 00:07:36.817 475.766 - 478.917: 100.0000% ( 1) 00:07:36.817 00:07:36.817 00:07:36.817 real 0m1.197s 00:07:36.817 user 0m1.070s 00:07:36.817 sys 0m0.078s 00:07:36.817 23:20:22 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.817 23:20:22 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:36.817 ************************************ 00:07:36.817 END TEST nvme_overhead 00:07:36.817 ************************************ 00:07:36.817 23:20:22 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:36.817 23:20:22 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:36.817 23:20:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.817 23:20:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.817 ************************************ 
00:07:36.817 START TEST nvme_arbitration 00:07:36.817 ************************************ 00:07:36.817 23:20:22 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:40.117 Initializing NVMe Controllers 00:07:40.117 Attached to 0000:00:10.0 00:07:40.117 Attached to 0000:00:11.0 00:07:40.117 Attached to 0000:00:13.0 00:07:40.117 Attached to 0000:00:12.0 00:07:40.117 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:40.117 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:40.117 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:40.117 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:40.117 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:40.117 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:40.117 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:40.117 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:40.117 Initialization complete. Launching workers. 00:07:40.117 Starting thread on core 1 with urgent priority queue 00:07:40.117 Starting thread on core 2 with urgent priority queue 00:07:40.117 Starting thread on core 3 with urgent priority queue 00:07:40.117 Starting thread on core 0 with urgent priority queue 00:07:40.117 QEMU NVMe Ctrl (12340 ) core 0: 6805.33 IO/s 14.69 secs/100000 ios 00:07:40.117 QEMU NVMe Ctrl (12342 ) core 0: 6805.33 IO/s 14.69 secs/100000 ios 00:07:40.117 QEMU NVMe Ctrl (12341 ) core 1: 6805.33 IO/s 14.69 secs/100000 ios 00:07:40.117 QEMU NVMe Ctrl (12342 ) core 1: 6805.33 IO/s 14.69 secs/100000 ios 00:07:40.117 QEMU NVMe Ctrl (12343 ) core 2: 6826.67 IO/s 14.65 secs/100000 ios 00:07:40.117 QEMU NVMe Ctrl (12342 ) core 3: 6912.00 IO/s 14.47 secs/100000 ios 00:07:40.117 ======================================================== 00:07:40.117 00:07:40.117 00:07:40.117 real 0m3.226s 00:07:40.117 user 0m9.045s 00:07:40.117 sys 0m0.097s 00:07:40.117 23:20:25 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.117 23:20:25 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:40.117 ************************************ 00:07:40.117 END TEST nvme_arbitration 00:07:40.117 ************************************ 00:07:40.117 23:20:25 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:40.117 23:20:25 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:40.117 23:20:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.117 23:20:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.117 ************************************ 00:07:40.117 START TEST nvme_single_aen 00:07:40.117 ************************************ 00:07:40.117 23:20:25 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:40.117 Asynchronous Event Request test 00:07:40.117 Attached to 0000:00:10.0 00:07:40.117 Attached to 0000:00:11.0 00:07:40.117 Attached to 0000:00:13.0 00:07:40.117 Attached to 0000:00:12.0 00:07:40.117 Reset controller to setup AER completions for this process 00:07:40.117 Registering asynchronous event callbacks... 
00:07:40.117 Getting orig temperature thresholds of all controllers 00:07:40.117 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:40.117 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:40.117 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:40.117 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:40.117 Setting all controllers temperature threshold low to trigger AER 00:07:40.117 Waiting for all controllers temperature threshold to be set lower 00:07:40.117 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:40.117 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:40.117 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:40.117 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:40.117 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:40.117 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:40.117 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:40.117 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:40.117 Waiting for all controllers to trigger AER and reset threshold 00:07:40.117 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.117 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.117 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.117 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.117 Cleaning up... 00:07:40.117 00:07:40.117 real 0m0.202s 00:07:40.117 user 0m0.070s 00:07:40.117 sys 0m0.092s 00:07:40.117 23:20:26 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.117 23:20:26 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:40.117 ************************************ 00:07:40.117 END TEST nvme_single_aen 00:07:40.117 ************************************ 00:07:40.117 23:20:26 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:40.117 23:20:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.117 23:20:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.117 23:20:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.117 ************************************ 00:07:40.117 START TEST nvme_doorbell_aers 00:07:40.117 ************************************ 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
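The two gen_nvme.sh/jq trace lines above show how nvme_doorbell_aers builds its device list: gen_nvme.sh emits a JSON config and jq extracts each controller's PCI address. Condensed into a standalone snippet (reconstructed from the trace, not copied from autotest_common.sh):

  rootdir=/home/vagrant/spdk_repo/spdk
  # Render the generated NVMe config and keep only the PCI addresses (traddr)
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # in this run: 0000:00:10.0 through 0000:00:13.0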
00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:40.117 23:20:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:40.375 [2024-11-19 23:20:26.444836] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:07:50.361 Executing: test_write_invalid_db 00:07:50.361 Waiting for AER completion... 00:07:50.361 Failure: test_write_invalid_db 00:07:50.361 00:07:50.361 Executing: test_invalid_db_write_overflow_sq 00:07:50.361 Waiting for AER completion... 00:07:50.361 Failure: test_invalid_db_write_overflow_sq 00:07:50.361 00:07:50.361 Executing: test_invalid_db_write_overflow_cq 00:07:50.361 Waiting for AER completion... 00:07:50.361 Failure: test_invalid_db_write_overflow_cq 00:07:50.361 00:07:50.361 23:20:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:50.361 23:20:36 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:50.361 [2024-11-19 23:20:36.483140] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:00.323 Executing: test_write_invalid_db 00:08:00.323 Waiting for AER completion... 00:08:00.324 Failure: test_write_invalid_db 00:08:00.324 00:08:00.324 Executing: test_invalid_db_write_overflow_sq 00:08:00.324 Waiting for AER completion... 00:08:00.324 Failure: test_invalid_db_write_overflow_sq 00:08:00.324 00:08:00.324 Executing: test_invalid_db_write_overflow_cq 00:08:00.324 Waiting for AER completion... 00:08:00.324 Failure: test_invalid_db_write_overflow_cq 00:08:00.324 00:08:00.324 23:20:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:00.324 23:20:46 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:00.582 [2024-11-19 23:20:46.537664] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:10.555 Executing: test_write_invalid_db 00:08:10.555 Waiting for AER completion... 00:08:10.555 Failure: test_write_invalid_db 00:08:10.555 00:08:10.555 Executing: test_invalid_db_write_overflow_sq 00:08:10.555 Waiting for AER completion... 00:08:10.555 Failure: test_invalid_db_write_overflow_sq 00:08:10.555 00:08:10.555 Executing: test_invalid_db_write_overflow_cq 00:08:10.555 Waiting for AER completion... 
00:08:10.555 Failure: test_invalid_db_write_overflow_cq 00:08:10.555 00:08:10.555 23:20:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.555 23:20:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:10.555 [2024-11-19 23:20:56.537784] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 Executing: test_write_invalid_db 00:08:20.553 Waiting for AER completion... 00:08:20.553 Failure: test_write_invalid_db 00:08:20.553 00:08:20.553 Executing: test_invalid_db_write_overflow_sq 00:08:20.553 Waiting for AER completion... 00:08:20.553 Failure: test_invalid_db_write_overflow_sq 00:08:20.553 00:08:20.553 Executing: test_invalid_db_write_overflow_cq 00:08:20.553 Waiting for AER completion... 00:08:20.553 Failure: test_invalid_db_write_overflow_cq 00:08:20.553 00:08:20.553 00:08:20.553 real 0m40.179s 00:08:20.553 user 0m34.226s 00:08:20.553 sys 0m5.600s 00:08:20.553 23:21:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.553 23:21:06 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 ************************************ 00:08:20.553 END TEST nvme_doorbell_aers 00:08:20.553 ************************************ 00:08:20.553 23:21:06 nvme -- nvme/nvme.sh@97 -- # uname 00:08:20.553 23:21:06 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:20.553 23:21:06 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:20.553 23:21:06 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:20.553 23:21:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.553 23:21:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.553 ************************************ 00:08:20.553 START TEST nvme_multi_aen 00:08:20.553 ************************************ 00:08:20.553 23:21:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:20.553 [2024-11-19 23:21:06.591905] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.591955] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.591966] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.593101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.593126] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.593134] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.594001] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. 
Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.594020] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.594027] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.594898] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.594919] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 [2024-11-19 23:21:06.594926] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75014) is not found. Dropping the request. 00:08:20.553 Child process pid: 75540 00:08:20.812 [Child] Asynchronous Event Request test 00:08:20.812 [Child] Attached to 0000:00:10.0 00:08:20.812 [Child] Attached to 0000:00:11.0 00:08:20.812 [Child] Attached to 0000:00:13.0 00:08:20.812 [Child] Attached to 0000:00:12.0 00:08:20.812 [Child] Registering asynchronous event callbacks... 00:08:20.812 [Child] Getting orig temperature thresholds of all controllers 00:08:20.812 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:20.812 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 [Child] Cleaning up... 00:08:20.812 Asynchronous Event Request test 00:08:20.812 Attached to 0000:00:10.0 00:08:20.812 Attached to 0000:00:11.0 00:08:20.812 Attached to 0000:00:13.0 00:08:20.812 Attached to 0000:00:12.0 00:08:20.812 Reset controller to setup AER completions for this process 00:08:20.812 Registering asynchronous event callbacks... 
00:08:20.812 Getting orig temperature thresholds of all controllers 00:08:20.812 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:20.812 Setting all controllers temperature threshold low to trigger AER 00:08:20.812 Waiting for all controllers temperature threshold to be set lower 00:08:20.812 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:20.812 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:20.812 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:20.812 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:20.812 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:20.812 Waiting for all controllers to trigger AER and reset threshold 00:08:20.812 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:20.812 Cleaning up... 00:08:20.812 00:08:20.812 real 0m0.398s 00:08:20.812 user 0m0.140s 00:08:20.812 sys 0m0.152s 00:08:20.812 23:21:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.812 23:21:06 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:20.812 ************************************ 00:08:20.812 END TEST nvme_multi_aen 00:08:20.812 ************************************ 00:08:20.812 23:21:06 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:20.812 23:21:06 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:20.812 23:21:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.812 23:21:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.812 ************************************ 00:08:20.812 START TEST nvme_startup 00:08:20.812 ************************************ 00:08:20.812 23:21:06 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:21.069 Initializing NVMe Controllers 00:08:21.069 Attached to 0000:00:10.0 00:08:21.069 Attached to 0000:00:11.0 00:08:21.069 Attached to 0000:00:13.0 00:08:21.069 Attached to 0000:00:12.0 00:08:21.069 Initialization complete. 00:08:21.069 Time used:130255.438 (us). 
00:08:21.069 00:08:21.069 real 0m0.182s 00:08:21.069 user 0m0.060s 00:08:21.069 sys 0m0.080s 00:08:21.069 23:21:07 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.069 23:21:07 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:21.069 ************************************ 00:08:21.069 END TEST nvme_startup 00:08:21.069 ************************************ 00:08:21.069 23:21:07 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:21.069 23:21:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.069 23:21:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.069 23:21:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.069 ************************************ 00:08:21.069 START TEST nvme_multi_secondary 00:08:21.069 ************************************ 00:08:21.069 23:21:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:21.069 23:21:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75591 00:08:21.069 23:21:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:21.069 23:21:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75592 00:08:21.069 23:21:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:21.069 23:21:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:24.377 Initializing NVMe Controllers 00:08:24.377 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.377 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.377 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.377 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.377 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:24.377 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:24.377 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:24.377 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:24.377 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:24.377 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:24.377 Initialization complete. Launching workers. 
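The command lines traced above are the whole mechanism of the multi-secondary test: three spdk_nvme_perf processes share one DPDK instance (-i 0) on disjoint core masks, so the later ones attach as secondary processes to the first. The commands are verbatim from this run; the backgrounding and wait order is inferred from the pid0/pid1/wait trace lines:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # first to attach with -i 0, becomes primary
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary on core 1
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # secondary on core 2, foreground
  wait "$pid0" "$pid1"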
00:08:24.377 ======================================================== 00:08:24.377 Latency(us) 00:08:24.377 Device Information : IOPS MiB/s Average min max 00:08:24.377 PCIE (0000:00:10.0) NSID 1 from core 1: 6859.70 26.80 2331.03 736.30 6029.51 00:08:24.377 PCIE (0000:00:11.0) NSID 1 from core 1: 6859.70 26.80 2332.05 762.53 5612.16 00:08:24.377 PCIE (0000:00:13.0) NSID 1 from core 1: 6859.70 26.80 2332.05 772.87 5892.79 00:08:24.377 PCIE (0000:00:12.0) NSID 1 from core 1: 6859.70 26.80 2332.08 770.81 6188.50 00:08:24.377 PCIE (0000:00:12.0) NSID 2 from core 1: 6859.70 26.80 2332.18 750.09 5687.20 00:08:24.377 PCIE (0000:00:12.0) NSID 3 from core 1: 6859.70 26.80 2332.25 722.05 5594.72 00:08:24.377 ======================================================== 00:08:24.377 Total : 41158.21 160.77 2331.94 722.05 6188.50 00:08:24.377 00:08:24.377 Initializing NVMe Controllers 00:08:24.377 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.377 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.377 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.377 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.377 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:24.377 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:24.377 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:24.377 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:24.377 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:24.377 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:24.377 Initialization complete. Launching workers. 00:08:24.377 ======================================================== 00:08:24.377 Latency(us) 00:08:24.377 Device Information : IOPS MiB/s Average min max 00:08:24.377 PCIE (0000:00:10.0) NSID 1 from core 2: 2854.77 11.15 5602.82 1121.42 17456.74 00:08:24.377 PCIE (0000:00:11.0) NSID 1 from core 2: 2854.77 11.15 5604.11 1134.77 17403.04 00:08:24.377 PCIE (0000:00:13.0) NSID 1 from core 2: 2854.77 11.15 5603.96 1143.49 13682.41 00:08:24.377 PCIE (0000:00:12.0) NSID 1 from core 2: 2854.77 11.15 5603.96 1137.75 13295.03 00:08:24.377 PCIE (0000:00:12.0) NSID 2 from core 2: 2854.77 11.15 5604.61 1087.72 13560.75 00:08:24.377 PCIE (0000:00:12.0) NSID 3 from core 2: 2854.77 11.15 5604.61 1109.20 17677.39 00:08:24.377 ======================================================== 00:08:24.377 Total : 17128.64 66.91 5604.01 1087.72 17677.39 00:08:24.377 00:08:24.377 23:21:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75591 00:08:26.291 Initializing NVMe Controllers 00:08:26.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:26.291 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:26.291 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:26.291 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:26.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:26.291 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:26.291 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:26.291 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:26.291 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:26.291 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:26.291 Initialization complete. Launching workers. 
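One quick way to sanity-check these tables: MiB/s should equal IOPS times the 4096-byte I/O size, divided by 2^20. Checking the first row of the core-1 table above:

  # 6859.70 IOPS x 4096 B = 28,097,331.2 B/s; / 1048576 = 26.80 MiB/s (matches the table)
  awk 'BEGIN { printf "%.2f MiB/s\n", 6859.70 * 4096 / 2^20 }'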
00:08:26.291 ======================================================== 00:08:26.291 Latency(us) 00:08:26.291 Device Information : IOPS MiB/s Average min max 00:08:26.291 PCIE (0000:00:10.0) NSID 1 from core 0: 9642.56 37.67 1658.06 700.11 7153.16 00:08:26.291 PCIE (0000:00:11.0) NSID 1 from core 0: 9642.56 37.67 1658.94 730.87 7054.43 00:08:26.291 PCIE (0000:00:13.0) NSID 1 from core 0: 9642.56 37.67 1658.91 720.98 6929.50 00:08:26.291 PCIE (0000:00:12.0) NSID 1 from core 0: 9642.56 37.67 1658.88 720.27 7023.79 00:08:26.291 PCIE (0000:00:12.0) NSID 2 from core 0: 9642.56 37.67 1658.85 564.38 7223.23 00:08:26.291 PCIE (0000:00:12.0) NSID 3 from core 0: 9642.56 37.67 1658.81 498.16 7133.24 00:08:26.291 ======================================================== 00:08:26.291 Total : 57855.34 226.00 1658.74 498.16 7223.23 00:08:26.291 00:08:26.291 23:21:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75592 00:08:26.291 23:21:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75661 00:08:26.291 23:21:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:26.291 23:21:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:26.291 23:21:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75662 00:08:26.291 23:21:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:29.579 Initializing NVMe Controllers 00:08:29.579 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:29.579 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:29.579 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:29.579 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:29.579 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:29.579 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:29.579 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:29.579 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:29.579 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:29.579 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:29.579 Initialization complete. Launching workers. 
00:08:29.579 ======================================================== 00:08:29.579 Latency(us) 00:08:29.579 Device Information : IOPS MiB/s Average min max 00:08:29.579 PCIE (0000:00:10.0) NSID 1 from core 0: 7124.98 27.83 2244.24 819.60 6770.99 00:08:29.579 PCIE (0000:00:11.0) NSID 1 from core 0: 7124.98 27.83 2245.24 774.20 6306.39 00:08:29.579 PCIE (0000:00:13.0) NSID 1 from core 0: 7124.98 27.83 2245.34 792.09 6111.30 00:08:29.579 PCIE (0000:00:12.0) NSID 1 from core 0: 7124.98 27.83 2245.52 885.50 6100.09 00:08:29.579 PCIE (0000:00:12.0) NSID 2 from core 0: 7124.98 27.83 2245.53 812.33 6228.57 00:08:29.579 PCIE (0000:00:12.0) NSID 3 from core 0: 7124.98 27.83 2245.55 787.56 6306.09 00:08:29.579 ======================================================== 00:08:29.579 Total : 42749.89 166.99 2245.24 774.20 6770.99 00:08:29.579 00:08:29.579 Initializing NVMe Controllers 00:08:29.579 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:29.579 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:29.579 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:29.579 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:29.579 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:29.579 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:29.579 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:29.579 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:29.579 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:29.579 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:29.579 Initialization complete. Launching workers. 00:08:29.579 ======================================================== 00:08:29.579 Latency(us) 00:08:29.579 Device Information : IOPS MiB/s Average min max 00:08:29.579 PCIE (0000:00:10.0) NSID 1 from core 1: 7143.86 27.91 2238.30 801.24 8131.68 00:08:29.579 PCIE (0000:00:11.0) NSID 1 from core 1: 7143.86 27.91 2239.28 847.88 7360.01 00:08:29.579 PCIE (0000:00:13.0) NSID 1 from core 1: 7143.86 27.91 2239.32 849.71 7094.79 00:08:29.579 PCIE (0000:00:12.0) NSID 1 from core 1: 7143.86 27.91 2239.24 854.06 7199.93 00:08:29.579 PCIE (0000:00:12.0) NSID 2 from core 1: 7143.86 27.91 2239.15 838.44 7202.15 00:08:29.579 PCIE (0000:00:12.0) NSID 3 from core 1: 7143.86 27.91 2239.27 844.06 7034.96 00:08:29.579 ======================================================== 00:08:29.579 Total : 42863.17 167.43 2239.09 801.24 8131.68 00:08:29.579 00:08:31.496 Initializing NVMe Controllers 00:08:31.496 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:31.496 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:31.496 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:31.496 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:31.496 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:31.496 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:31.496 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:31.496 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:31.496 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:31.496 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:31.496 Initialization complete. Launching workers. 
00:08:31.496 ======================================================== 00:08:31.496 Latency(us) 00:08:31.496 Device Information : IOPS MiB/s Average min max 00:08:31.496 PCIE (0000:00:10.0) NSID 1 from core 2: 3787.21 14.79 4222.88 823.44 13845.46 00:08:31.496 PCIE (0000:00:11.0) NSID 1 from core 2: 3787.21 14.79 4224.24 749.48 13667.40 00:08:31.496 PCIE (0000:00:13.0) NSID 1 from core 2: 3787.21 14.79 4223.54 840.50 15917.88 00:08:31.496 PCIE (0000:00:12.0) NSID 1 from core 2: 3787.21 14.79 4224.30 837.86 13353.75 00:08:31.496 PCIE (0000:00:12.0) NSID 2 from core 2: 3787.21 14.79 4224.00 811.38 13463.19 00:08:31.496 PCIE (0000:00:12.0) NSID 3 from core 2: 3787.21 14.79 4223.71 589.05 12529.49 00:08:31.496 ======================================================== 00:08:31.496 Total : 22723.25 88.76 4223.78 589.05 15917.88 00:08:31.496 00:08:31.757 23:21:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75661 00:08:31.757 23:21:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75662 00:08:31.757 00:08:31.757 real 0m10.630s 00:08:31.757 user 0m18.348s 00:08:31.757 sys 0m0.562s 00:08:31.757 23:21:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.757 23:21:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:31.757 ************************************ 00:08:31.757 END TEST nvme_multi_secondary 00:08:31.757 ************************************ 00:08:31.757 23:21:17 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:31.757 23:21:17 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74629 ]] 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@1094 -- # kill 74629 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@1095 -- # wait 74629 00:08:31.757 [2024-11-19 23:21:17.760821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.760926] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.760957] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.760991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.761862] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.761930] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.761958] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.761988] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.762836] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 
00:08:31.757 [2024-11-19 23:21:17.762910] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.762939] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.762973] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.763441] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.763474] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.763484] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 [2024-11-19 23:21:17.763494] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75539) is not found. Dropping the request. 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:31.757 23:21:17 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.757 23:21:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.757 ************************************ 00:08:31.757 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:31.757 ************************************ 00:08:31.757 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:31.757 * Looking for test storage... 
00:08:31.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:31.757 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.757 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.757 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.018 --rc genhtml_branch_coverage=1 00:08:32.018 --rc genhtml_function_coverage=1 00:08:32.018 --rc genhtml_legend=1 00:08:32.018 --rc geninfo_all_blocks=1 00:08:32.018 --rc geninfo_unexecuted_blocks=1 00:08:32.018 00:08:32.018 ' 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.018 --rc genhtml_branch_coverage=1 00:08:32.018 --rc genhtml_function_coverage=1 00:08:32.018 --rc genhtml_legend=1 00:08:32.018 --rc geninfo_all_blocks=1 00:08:32.018 --rc geninfo_unexecuted_blocks=1 00:08:32.018 00:08:32.018 ' 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.018 --rc genhtml_branch_coverage=1 00:08:32.018 --rc genhtml_function_coverage=1 00:08:32.018 --rc genhtml_legend=1 00:08:32.018 --rc geninfo_all_blocks=1 00:08:32.018 --rc geninfo_unexecuted_blocks=1 00:08:32.018 00:08:32.018 ' 00:08:32.018 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.018 --rc genhtml_branch_coverage=1 00:08:32.018 --rc genhtml_function_coverage=1 00:08:32.018 --rc genhtml_legend=1 00:08:32.019 --rc geninfo_all_blocks=1 00:08:32.019 --rc geninfo_unexecuted_blocks=1 00:08:32.019 00:08:32.019 ' 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:32.019 
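The long lcov xtrace above (before the ctrlr_name/err_injection_timeout/test_timeout setup) is just a version gate: it parses `lcov --version`, compares it field by field against 2, and, since 1.15 < 2, exports the pre-2.0 `--rc lcov_branch_coverage=1`-style option names plus the genhtml flags shown. The same check condensed into a sketch (the real cmp_versions in scripts/common.sh is more general than this):

  lcov_ver=$(lcov --version | awk '{print $NF}')    # "1.15" in this run
  # sort -V -C succeeds when its input is already in version order, i.e. lcov_ver <= 2
  if printf '%s\n' "$lcov_ver" 2 | sort -V -C; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi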
23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:32.019 23:21:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75821 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75821 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75821 ']' 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
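Before the stuck-admin-command test can issue RPCs, the harness starts spdk_tgt on four cores (-m 0xF), installs a kill trap, and blocks in waitforlisten until the RPC socket answers. Reduced to its essentials; this is a sketch of the traced pattern, and polling via rpc_get_methods is this sketch's assumption, not necessarily how the waitforlisten helper is implemented:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0xF &    # -m 0xF: run reactors on cores 0-3
  spdk_target_pid=$!
  trap 'kill "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
  # Poll /var/tmp/spdk.sock until an RPC round-trips
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
      sleep 0.5
  done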
00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.019 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:32.019 [2024-11-19 23:21:18.105382] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:08:32.019 [2024-11-19 23:21:18.105500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75821 ] 00:08:32.280 [2024-11-19 23:21:18.272551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.280 [2024-11-19 23:21:18.296270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.280 [2024-11-19 23:21:18.296449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.280 [2024-11-19 23:21:18.296742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.280 [2024-11-19 23:21:18.296848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.851 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.851 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:32.851 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:32.851 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.851 23:21:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:32.851 nvme0n1 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_UP8oq.txt 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:32.851 true 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732058479 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75844 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:32.851 23:21:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.471 [2024-11-19 23:21:21.043557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:35.471 [2024-11-19 23:21:21.043953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:35.471 [2024-11-19 23:21:21.043983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:35.471 [2024-11-19 23:21:21.044021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:35.471 [2024-11-19 23:21:21.048102] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.471 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75844 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75844 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75844 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_UP8oq.txt 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:35.471 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_UP8oq.txt 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75821 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75821 ']' 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75821 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75821 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75821' 00:08:35.472 killing process with pid 75821 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75821 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75821 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:35.472 00:08:35.472 real 0m3.662s 00:08:35.472 user 0m13.098s 00:08:35.472 sys 0m0.480s 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:35.472 ************************************ 00:08:35.472 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:35.472 ************************************ 00:08:35.472 23:21:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:35.472 23:21:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:35.472 23:21:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:35.472 23:21:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.472 23:21:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.472 23:21:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:35.472 ************************************ 00:08:35.472 START TEST nvme_fio 00:08:35.472 ************************************ 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:35.472 23:21:21 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:35.472 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:35.734 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:35.734 23:21:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:35.995 23:21:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:35.995 23:21:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:35.995 23:21:22 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:35.995 23:21:22 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:36.254 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:36.254 fio-3.35 00:08:36.254 Starting 1 thread 00:08:42.836 00:08:42.836 test: (groupid=0, jobs=1): err= 0: pid=75973: Tue Nov 19 23:21:28 2024 00:08:42.836 read: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(174MiB/2001msec) 00:08:42.836 slat (usec): min=3, max=107, avg= 5.21, stdev= 2.37 00:08:42.836 clat (usec): min=279, max=13033, avg=2883.75, stdev=924.08 00:08:42.836 lat (usec): min=285, max=13073, avg=2888.96, stdev=925.45 00:08:42.836 clat percentiles (usec): 00:08:42.836 | 1.00th=[ 1860], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2343], 00:08:42.836 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2606], 00:08:42.836 | 70.00th=[ 2835], 80.00th=[ 3490], 90.00th=[ 4015], 95.00th=[ 4883], 00:08:42.836 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 8455], 99.95th=[ 9241], 00:08:42.836 | 99.99th=[12649] 00:08:42.836 bw ( KiB/s): min=84336, max=95000, per=100.00%, avg=90314.67, stdev=5448.37, samples=3 00:08:42.836 iops : min=21084, max=23750, avg=22578.67, stdev=1362.09, samples=3 00:08:42.836 write: IOPS=22.0k, BW=86.1MiB/s (90.3MB/s)(172MiB/2001msec); 0 zone resets 00:08:42.836 slat (nsec): min=3422, max=73945, avg=5447.18, stdev=2363.39 00:08:42.836 clat (usec): min=300, max=12824, avg=2883.86, stdev=925.09 00:08:42.836 lat (usec): min=305, max=12842, avg=2889.31, stdev=926.42 00:08:42.836 clat percentiles (usec): 00:08:42.836 | 1.00th=[ 1860], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:08:42.836 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2606], 00:08:42.836 | 70.00th=[ 2835], 80.00th=[ 3490], 90.00th=[ 4015], 95.00th=[ 4883], 00:08:42.836 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 8356], 99.95th=[ 9765], 00:08:42.836 | 99.99th=[12387] 00:08:42.836 bw ( KiB/s): min=86392, max=93952, per=100.00%, avg=90565.67, stdev=3841.01, samples=3 00:08:42.836 iops : min=21598, max=23488, avg=22641.33, stdev=960.23, samples=3 00:08:42.836 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:42.836 lat (msec) : 2=1.93%, 4=87.76%, 10=10.24%, 20=0.04% 00:08:42.836 cpu : usr=99.10%, sys=0.10%, ctx=4, majf=0, 
minf=627 00:08:42.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:42.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:42.836 issued rwts: total=44416,44098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:42.836 00:08:42.836 Run status group 0 (all jobs): 00:08:42.836 READ: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=174MiB (182MB), run=2001-2001msec 00:08:42.836 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=172MiB (181MB), run=2001-2001msec 00:08:42.836 ----------------------------------------------------- 00:08:42.836 Suppressions used: 00:08:42.836 count bytes template 00:08:42.836 1 32 /usr/src/fio/parse.c 00:08:42.836 1 8 libtcmalloc_minimal.so 00:08:42.836 ----------------------------------------------------- 00:08:42.836 00:08:42.836 23:21:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:42.836 23:21:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:42.836 23:21:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:42.836 23:21:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:42.836 23:21:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:42.836 23:21:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:42.836 23:21:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:42.836 23:21:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:42.836 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.096 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.096 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.096 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.096 23:21:29 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.096 23:21:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:43.096 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.096 fio-3.35 00:08:43.096 Starting 1 thread 00:08:48.382 00:08:48.382 test: (groupid=0, jobs=1): err= 0: pid=76036: Tue Nov 19 23:21:34 2024 00:08:48.382 read: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(143MiB/2001msec) 00:08:48.382 slat (usec): min=4, max=257, avg= 6.53, stdev= 3.01 00:08:48.382 clat (usec): min=398, max=10068, avg=3470.52, stdev=1006.50 00:08:48.382 lat (usec): min=404, max=10122, avg=3477.04, stdev=1007.74 00:08:48.382 clat percentiles (usec): 00:08:48.382 | 1.00th=[ 2376], 5.00th=[ 2573], 10.00th=[ 2671], 20.00th=[ 2802], 00:08:48.382 | 30.00th=[ 2933], 40.00th=[ 3032], 50.00th=[ 3163], 60.00th=[ 3294], 00:08:48.382 | 70.00th=[ 3490], 80.00th=[ 3884], 90.00th=[ 4883], 95.00th=[ 5735], 00:08:48.382 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8848], 00:08:48.382 | 99.99th=[ 9896] 00:08:48.382 bw ( KiB/s): min=72528, max=76440, per=100.00%, avg=74487.33, stdev=1956.01, samples=3 00:08:48.382 iops : min=18132, max=19110, avg=18621.67, stdev=489.00, samples=3 00:08:48.382 write: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(143MiB/2001msec); 0 zone resets 00:08:48.382 slat (nsec): min=5092, max=92556, avg=6947.20, stdev=2784.95 00:08:48.382 clat (usec): min=455, max=9964, avg=3505.90, stdev=1012.57 00:08:48.382 lat (usec): min=461, max=9992, avg=3512.85, stdev=1013.81 00:08:48.382 clat percentiles (usec): 00:08:48.382 | 1.00th=[ 2409], 5.00th=[ 2606], 10.00th=[ 2704], 20.00th=[ 2835], 00:08:48.382 | 30.00th=[ 2933], 40.00th=[ 3064], 50.00th=[ 3163], 60.00th=[ 3326], 00:08:48.382 | 70.00th=[ 3523], 80.00th=[ 3949], 90.00th=[ 4948], 95.00th=[ 5800], 00:08:48.382 | 99.00th=[ 7242], 99.50th=[ 7570], 99.90th=[ 8356], 99.95th=[ 8586], 00:08:48.382 | 99.99th=[ 9765] 00:08:48.382 bw ( KiB/s): min=72576, max=76432, per=100.00%, avg=74537.67, stdev=1928.88, samples=3 00:08:48.382 iops : min=18144, max=19108, avg=18634.33, stdev=482.22, samples=3 00:08:48.382 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:08:48.382 lat (msec) : 2=0.17%, 4=81.10%, 10=18.70%, 20=0.01% 00:08:48.382 cpu : usr=98.95%, sys=0.00%, ctx=4, majf=0, minf=626 00:08:48.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:48.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.382 issued rwts: total=36582,36573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.382 00:08:48.382 Run status group 0 (all jobs): 00:08:48.382 READ: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=143MiB (150MB), run=2001-2001msec 00:08:48.382 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=143MiB (150MB), run=2001-2001msec 00:08:48.643 ----------------------------------------------------- 00:08:48.643 Suppressions used: 00:08:48.643 count bytes template 00:08:48.643 1 32 /usr/src/fio/parse.c 00:08:48.643 1 8 libtcmalloc_minimal.so 00:08:48.643 ----------------------------------------------------- 00:08:48.643 00:08:48.643 
23:21:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:48.643 23:21:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:48.643 23:21:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:48.643 23:21:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:48.903 23:21:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:48.903 23:21:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:49.229 23:21:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:49.229 23:21:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:49.229 23:21:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:49.229 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:49.229 fio-3.35 00:08:49.229 Starting 1 thread 00:08:55.836 00:08:55.836 test: (groupid=0, jobs=1): err= 0: pid=76097: Tue Nov 19 23:21:41 2024 00:08:55.836 read: IOPS=18.7k, BW=73.0MiB/s (76.6MB/s)(146MiB/2001msec) 00:08:55.836 slat (nsec): min=4311, max=81900, avg=5729.94, stdev=2924.69 00:08:55.836 clat (usec): min=1180, max=11727, avg=3401.25, stdev=1165.67 00:08:55.836 lat (usec): min=1186, max=11774, avg=3406.98, stdev=1166.97 00:08:55.836 clat percentiles (usec): 00:08:55.836 | 1.00th=[ 2008], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:08:55.836 | 30.00th=[ 2737], 
40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3130], 00:08:55.836 | 70.00th=[ 3392], 80.00th=[ 4080], 90.00th=[ 5145], 95.00th=[ 6063], 00:08:55.836 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 8848], 99.95th=[10421], 00:08:55.836 | 99.99th=[11600] 00:08:55.836 bw ( KiB/s): min=73232, max=84816, per=100.00%, avg=77182.67, stdev=6612.02, samples=3 00:08:55.836 iops : min=18308, max=21204, avg=19295.67, stdev=1653.00, samples=3 00:08:55.836 write: IOPS=18.7k, BW=73.0MiB/s (76.6MB/s)(146MiB/2001msec); 0 zone resets 00:08:55.836 slat (usec): min=4, max=101, avg= 5.85, stdev= 2.96 00:08:55.836 clat (usec): min=1199, max=11661, avg=3424.74, stdev=1168.72 00:08:55.836 lat (usec): min=1205, max=11675, avg=3430.59, stdev=1170.04 00:08:55.836 clat percentiles (usec): 00:08:55.836 | 1.00th=[ 2024], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2638], 00:08:55.836 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3163], 00:08:55.836 | 70.00th=[ 3392], 80.00th=[ 4113], 90.00th=[ 5145], 95.00th=[ 6063], 00:08:55.836 | 99.00th=[ 7373], 99.50th=[ 7898], 99.90th=[ 8979], 99.95th=[10552], 00:08:55.836 | 99.99th=[11600] 00:08:55.836 bw ( KiB/s): min=73360, max=85056, per=100.00%, avg=77297.67, stdev=6719.17, samples=3 00:08:55.836 iops : min=18340, max=21264, avg=19324.33, stdev=1679.86, samples=3 00:08:55.836 lat (msec) : 2=0.95%, 4=78.00%, 10=20.98%, 20=0.07% 00:08:55.836 cpu : usr=98.95%, sys=0.05%, ctx=3, majf=0, minf=627 00:08:55.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:55.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.836 issued rwts: total=37409,37410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.836 00:08:55.836 Run status group 0 (all jobs): 00:08:55.836 READ: bw=73.0MiB/s (76.6MB/s), 73.0MiB/s-73.0MiB/s (76.6MB/s-76.6MB/s), io=146MiB (153MB), run=2001-2001msec 00:08:55.836 WRITE: bw=73.0MiB/s (76.6MB/s), 73.0MiB/s-73.0MiB/s (76.6MB/s-76.6MB/s), io=146MiB (153MB), run=2001-2001msec 00:08:55.836 ----------------------------------------------------- 00:08:55.836 Suppressions used: 00:08:55.836 count bytes template 00:08:55.836 1 32 /usr/src/fio/parse.c 00:08:55.836 1 8 libtcmalloc_minimal.so 00:08:55.836 ----------------------------------------------------- 00:08:55.836 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:55.836 23:21:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:55.836 23:21:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:55.836 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:55.836 fio-3.35 00:08:55.836 Starting 1 thread 00:09:01.104 00:09:01.104 test: (groupid=0, jobs=1): err= 0: pid=76151: Tue Nov 19 23:21:46 2024 00:09:01.104 read: IOPS=19.5k, BW=76.1MiB/s (79.8MB/s)(152MiB/2001msec) 00:09:01.104 slat (nsec): min=4270, max=91397, avg=5497.18, stdev=2739.93 00:09:01.104 clat (usec): min=347, max=10119, avg=3266.22, stdev=1193.80 00:09:01.104 lat (usec): min=352, max=10157, avg=3271.72, stdev=1195.12 00:09:01.104 clat percentiles (usec): 00:09:01.104 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:01.104 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2933], 00:09:01.104 | 70.00th=[ 3261], 80.00th=[ 3982], 90.00th=[ 5211], 95.00th=[ 5997], 00:09:01.104 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8848], 99.95th=[ 9241], 00:09:01.104 | 99.99th=[ 9896] 00:09:01.104 bw ( KiB/s): min=68232, max=81728, per=97.79%, avg=76242.67, stdev=7093.55, samples=3 00:09:01.104 iops : min=17058, max=20432, avg=19060.67, stdev=1773.39, samples=3 00:09:01.104 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec); 0 zone resets 00:09:01.104 slat (nsec): min=4312, max=91915, avg=5687.11, stdev=2916.38 00:09:01.104 clat (usec): min=339, max=9949, avg=3286.09, stdev=1194.91 00:09:01.104 lat (usec): min=344, max=9962, avg=3291.78, stdev=1196.29 00:09:01.104 clat percentiles (usec): 00:09:01.104 | 1.00th=[ 2057], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:01.104 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:01.104 | 70.00th=[ 3294], 80.00th=[ 4015], 90.00th=[ 5211], 95.00th=[ 5997], 00:09:01.104 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8979], 
99.95th=[ 9241], 00:09:01.104 | 99.99th=[ 9634] 00:09:01.104 bw ( KiB/s): min=68480, max=81440, per=98.14%, avg=76392.00, stdev=6938.46, samples=3 00:09:01.104 iops : min=17120, max=20360, avg=19098.00, stdev=1734.62, samples=3 00:09:01.104 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:09:01.104 lat (msec) : 2=0.60%, 4=79.34%, 10=20.00%, 20=0.01% 00:09:01.104 cpu : usr=98.50%, sys=0.35%, ctx=3, majf=0, minf=626 00:09:01.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:01.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:01.104 issued rwts: total=39003,38940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:01.104 00:09:01.104 Run status group 0 (all jobs): 00:09:01.104 READ: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=152MiB (160MB), run=2001-2001msec 00:09:01.104 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (159MB), run=2001-2001msec 00:09:01.104 ----------------------------------------------------- 00:09:01.104 Suppressions used: 00:09:01.104 count bytes template 00:09:01.104 1 32 /usr/src/fio/parse.c 00:09:01.104 1 8 libtcmalloc_minimal.so 00:09:01.104 ----------------------------------------------------- 00:09:01.104 00:09:01.104 23:21:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:01.104 23:21:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:01.104 00:09:01.104 real 0m25.093s 00:09:01.104 user 0m16.069s 00:09:01.104 sys 0m15.840s 00:09:01.104 23:21:46 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.104 23:21:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:01.104 ************************************ 00:09:01.104 END TEST nvme_fio 00:09:01.104 ************************************ 00:09:01.104 00:09:01.104 real 1m32.472s 00:09:01.104 user 3m31.535s 00:09:01.104 sys 0m25.888s 00:09:01.104 23:21:46 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.104 ************************************ 00:09:01.104 END TEST nvme 00:09:01.104 ************************************ 00:09:01.104 23:21:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.104 23:21:46 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:01.104 23:21:46 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:01.104 23:21:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.104 23:21:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.104 23:21:46 -- common/autotest_common.sh@10 -- # set +x 00:09:01.104 ************************************ 00:09:01.104 START TEST nvme_scc 00:09:01.104 ************************************ 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:01.104 * Looking for test storage... 
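[note] All four fio passes above (one per controller, 0000:00:10.0 through 0000:00:13.0) are launched the same way by the traced fio_nvme wrapper; condensed from the commands visible in the trace, including the sanitizer preload step:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  # find the ASan runtime the plugin links; it must come first in the preload order
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

The PCI address is written with dots (0000.00.10.0) because fio splits its filename option on ':'; the SPDK ioengine expects colons replaced with dots in the traddr.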
00:09:01.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.104 --rc genhtml_branch_coverage=1 00:09:01.104 --rc genhtml_function_coverage=1 00:09:01.104 --rc genhtml_legend=1 00:09:01.104 --rc geninfo_all_blocks=1 00:09:01.104 --rc geninfo_unexecuted_blocks=1 00:09:01.104 00:09:01.104 ' 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.104 --rc genhtml_branch_coverage=1 00:09:01.104 --rc genhtml_function_coverage=1 00:09:01.104 --rc genhtml_legend=1 00:09:01.104 --rc geninfo_all_blocks=1 00:09:01.104 --rc geninfo_unexecuted_blocks=1 00:09:01.104 00:09:01.104 ' 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:01.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.104 --rc genhtml_branch_coverage=1 00:09:01.104 --rc genhtml_function_coverage=1 00:09:01.104 --rc genhtml_legend=1 00:09:01.104 --rc geninfo_all_blocks=1 00:09:01.104 --rc geninfo_unexecuted_blocks=1 00:09:01.104 00:09:01.104 ' 00:09:01.104 23:21:46 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.104 --rc genhtml_branch_coverage=1 00:09:01.104 --rc genhtml_function_coverage=1 00:09:01.104 --rc genhtml_legend=1 00:09:01.104 --rc geninfo_all_blocks=1 00:09:01.104 --rc geninfo_unexecuted_blocks=1 00:09:01.104 00:09:01.104 ' 00:09:01.104 23:21:46 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.104 23:21:46 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.104 23:21:46 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.104 23:21:46 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.104 23:21:46 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.104 23:21:46 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:01.104 23:21:46 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
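[note] The lcov probe at the start of this nvme_scc block reuses the same scripts/common.sh version comparison that opened this excerpt: split both versions on '.', '-' and ':', validate each field as a decimal, then compare component-wise. A condensed, hypothetical sketch of the '<' case only (the real cmp_versions also handles the other operators and validates fields with its decimal helper):

  version_lt() {
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      # walk the longer of the two component lists, padding the shorter with 0
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not '<'
  }
  # usage mirroring the traced probe:
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov"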
00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:01.104 23:21:46 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:01.104 23:21:46 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:01.104 23:21:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:01.104 23:21:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:01.104 23:21:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:01.104 23:21:46 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:01.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:01.362 Waiting for block devices as requested 00:09:01.362 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.362 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.620 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:01.620 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:06.894 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:06.894 23:21:52 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:06.894 23:21:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:06.894 23:21:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:06.894 23:21:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.894 23:21:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
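[note] The register dump that continues below is nvme_get from test/common/nvme/functions.sh walking `nvme id-ctrl /dev/nvme0` line by line and storing every field in a per-controller associative array. The traced code evals quoted assignments; the direct assignment in this reduced sketch is a simplification of that:

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}          # keys arrive padded, e.g. 'vid   '
      [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  # afterwards: ${nvme0[vid]} -> 0x1b36, ${nvme0[mdts]} -> 7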
00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.894 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
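[note] By this point the nvme0 array holds the controller's capability words, which is what the rest of the scan-driven tests key off. A hypothetical consumer of two values captured above; the bit positions come from the NVMe spec, not from this trace:

  # oacs=0x12a: bit 3 (0x08) is Namespace Management support
  if (( ${nvme0[oacs]} & 0x08 )); then
      echo "nvme0: namespace management supported"
  fi
  # mdts=7 caps transfers at 2^7 minimum-size pages; assuming the usual
  # 4 KiB MPSMIN (CAP is not shown in this trace) that is 512 KiB
  echo "max transfer: $(( (4096 << ${nvme0[mdts]}) / 1024 )) KiB"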
00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:06.895 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:06.896 23:21:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.896 23:21:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.896 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.897 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:06.898 23:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:06.898 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
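The namespace fields parsed in this stretch pin down the capacity of nvme0n1: nsze, ncap, and nuse are all 0x140000 blocks, flbas=0x4 selects LBA format 4, and the lbaf4 entry just below reports lbads:12 (in use), i.e. 2^12 = 4096-byte blocks. A quick arithmetic check in the same shell dialect:

  blocks=$((0x140000))                       # nsze = 1310720 blocks
  lbads=12                                   # from the in-use lbaf4 entry (lbads:12)
  echo $((blocks * (1 << lbads)))            # 5368709120 bytes
  echo $((blocks * (1 << lbads) >> 30))GiB   # 5GiB -- consistent with a 5 GiB QEMU-emulated namespace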
00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.899 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:06.900 23:21:52 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:06.900 23:21:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:06.900 23:21:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:06.900 23:21:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.900 23:21:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:06.900 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:06.901 23:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:06.901 
23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
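A few entries above, the outer loop in nvme/functions.sh moved on to the second controller: it iterates /sys/class/nvme/nvme*, resolves the controller's PCI address (0000:00:10.0 here), filters it through pci_can_use from scripts/common.sh, and then repeats the same nvme_get id-ctrl pass for nvme1. In outline, again reconstructed from the trace rather than quoted from the source (only the existence test, the pci= assignment, the pci_can_use call, and the nvme_get call are visible above; the readlink step and the allow/block lists pci_can_use consults are assumptions):

  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue            # trace line @48: [[ -e /sys/class/nvme/nvme1 ]]
      pci=$(readlink -f "$ctrl/device")     # assumed; the trace only shows pci=0000:00:10.0
      pci=${pci##*/}
      pci_can_use "$pci" || continue        # scripts/common.sh filter; returns 0 above
      ctrl_dev=${ctrl##*/}                  # nvme1
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
  done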
00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:06.901 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:06.902 23:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.902 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:06.903 23:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:06.903 23:21:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.903 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
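The block above is the heart of nvme_get: every "reg : val" line printed by the bundled nvme-cli binary is split on ':' and stored into a global associative array named after the device. A condensed sketch of that reader, assuming the key/value trimming shown here (the real functions.sh may trim differently):

    nvme_get() {
        local ref=$1 reg val
        shift                                      # cf. functions.sh@17-18
        local -gA "$ref=()"                        # global assoc array, cf. @20
        while IFS=: read -r reg val; do            # split "reg : val", cf. @21
            reg=${reg%% *}                         # drop padding after the key name
            val="${val#"${val%%[![:space:]]*}"}"   # strip leading whitespace
            [[ -n $val ]] || continue              # skip empty values, cf. @22
            eval "${ref}[${reg}]=\"\$val\""        # e.g. nvme1[sqes]="0x66", cf. @23
        done < <(nvme "$@")                        # the script runs /usr/local/src/nvme-cli/nvme, cf. @16
    }
    # usage: nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[subnqn]}"

Multi-word values such as 'QEMU NVMe Ctrl ' and colon-bearing ones such as the ps0 power-state line survive intact because read hands the whole remainder of the line to val.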
00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.904 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:06.905 
23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.905 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:06.906 23:21:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:06.906 23:21:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:06.906 23:21:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.906 23:21:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:06.906 23:21:52 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.906 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:06.907 23:21:52 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:06.907 23:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.907 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
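Around that reader sits the controller scan itself: each /sys/class/nvme/nvmeX that passes the PCI filter gets its id-ctrl fields read, every nvmeXnY namespace under it gets id-ns, and the results are registered in the ctrls, nvmes, bdfs and ordered_ctrls maps (functions.sh@47-63 in the trace). A condensed sketch under the same names; the wrapper name scan_nvme_ctrls is hypothetical, and pci_can_use is stubbed out since the real check in scripts/common.sh consults block/allow lists that are empty in this run:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { return 0; }   # stub; scripts/common.sh@18-27 checks PCI lists
    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns pci
        for ctrl in /sys/class/nvme/nvme*; do               # cf. functions.sh@47
            [[ -e $ctrl ]] || continue                      # cf. @48
            ctrl_dev=${ctrl##*/}                            # nvme1, nvme2, ... cf. @51
            pci=$(readlink -f "$ctrl/device")
            pci=${pci##*/}                                  # e.g. 0000:00:10.0, cf. @49
            pci_can_use "$pci" || continue                  # cf. @50
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # cf. @52, sketched above
            local -n _ctrl_ns=${ctrl_dev}_ns                # per-controller ns map, cf. @53
            for ns in "$ctrl/${ctrl_dev}n"*; do             # cf. @54
                [[ -e $ns ]] || continue                    # cf. @55
                nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}" # same reader, cf. @56-57
                _ctrl_ns[${ns##*n}]=${ns##*/}               # index by ns number, cf. @58
            done
            ctrls[$ctrl_dev]=$ctrl_dev                      # cf. @60
            nvmes[$ctrl_dev]=${ctrl_dev}_ns                 # cf. @61
            bdfs[$ctrl_dev]=$pci                            # cf. @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # keeps numeric order, cf. @63
        done
    }

Indexing ordered_ctrls by the controller number rather than appending is what lets later consumers walk nvme0, nvme1, nvme2 in device order even if sysfs globbing returns them shuffled.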
00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:06.908 23:21:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.908 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
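By this point in the trace each discovered controller is fully described by those arrays. A small hypothetical consumer, only to show the shape of the data (the loop and output format are illustrative; the array and field names come from the trace itself):

    print_scanned_ctrls() {
        local c
        for c in "${ordered_ctrls[@]}"; do
            [[ -n $c ]] || continue
            local -n _d=$c _ns=${c}_ns     # namerefs into nvmeX and nvmeX_ns
            printf '%s bdf=%s subnqn=%s ns_count=%s\n' \
                "$c" "${bdfs[$c]}" "${_d[subnqn]}" "${#_ns[@]}"
        done
    }
    # prints e.g.: nvme1 bdf=0000:00:10.0 subnqn=nqn.2019-08.org.qemu:12340 ns_count=1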
00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:06.909 
23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.909 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
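Between the controller dump and these id-ns records, the @53-@58 entries show how namespaces are walked: a nameref (_ctrl_ns) aliases the per-controller table nvme2_ns, each /sys/class/nvme/nvme2/nvme2n* sysfs path is checked, and nvme_get is re-run with id-ns so every namespace gets its own array (nvme2n1 here). A hedged sketch of that walk, reusing nvme_get_sketch from above (the real helper additionally takes the nvme-cli subcommand as an argument):

# Sketch of the namespace walk traced at functions.sh@53-58.
declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns               # nameref, as in `local -n _ctrl_ns=nvme2_ns`
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/${ctrl##*/}n"*; do        # expands to .../nvme2n1, .../nvme2n2, ...
  [[ -e $ns ]] || continue
  ns_dev=${ns##*/}                         # nvme2n1
  # real script: nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
  _ctrl_ns[${ns##*n}]=$ns_dev              # index "1" -> nvme2n1, matching @58
done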
00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:06.910 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:06.911 23:21:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:06.911 23:21:52 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:06.911 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:06.912 23:21:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:06.912 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
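One detail worth decoding from these id-ns fields: flbas=0x4 selects LBA format 4, whose descriptor reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte data blocks with no per-block metadata on these QEMU namespaces. A small sketch of that decode against the arrays built above (with nlbaf=7 the low nibble of flbas is the whole format index; the higher bits only matter with more than 16 formats):

# Decode the in-use block size from a namespace array, e.g. nvme2n1.
flbas=${nvme2n1[flbas]}                    # 0x4 in the trace
fmt=$(( flbas & 0xf ))                     # low nibble = LBA format index
lbaf=${nvme2n1[lbaf$fmt]}                  # "ms:0 lbads:12 rp:0 (in use)"
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # -> 12
echo "block size: $(( 1 << lbads )) bytes" # -> 4096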
00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:06.913 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 
23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 
23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:06.914 23:21:52 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.914 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:06.915 
23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:06.915 23:21:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:06.915 23:21:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:06.915 23:21:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:06.916 23:21:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:06.916 23:21:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:06.916 23:21:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.916 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 
23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.917 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:06.918 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
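Each eval in the trace above fills one slot of a global bash associative array (here nvme3); a few entries further down, ctrl_has_scc reads those slots back through get_nvme_ctrl_feature, which uses a nameref instead of a second eval. A minimal sketch of that lookup, condensed from the nvme/functions.sh calls visible in this trace (the standalone framing is illustrative, not the harness's exact code):

    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=$2
        # nameref: _ctrl aliases the associative array whose *name* is in $ctrl,
        # i.e. the nvme3=() being populated by the eval lines above
        local -n _ctrl=$ctrl
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    get_nvme_ctrl_feature nvme3 oncs   # prints 0x15d once the parse completes

Reading through a nameref also sidesteps the quoting pitfalls a second round of eval would reintroduce for values with embedded spaces, such as 'QEMU NVMe Ctrl '.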
00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:06.919 23:21:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:06.919 23:21:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:06.919 23:21:53 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:06.920 
23:21:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:06.920 23:21:53 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:06.920 23:21:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:06.920 23:21:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:06.920 23:21:53 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:07.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.051 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.051 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.051 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:08.051 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:08.051 23:21:54 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:08.051 23:21:54 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:08.051 23:21:54 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.051 23:21:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:08.051 ************************************ 00:09:08.051 START TEST nvme_simple_copy 00:09:08.051 ************************************ 00:09:08.051 23:21:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:08.310 Initializing NVMe Controllers 00:09:08.310 Attaching to 0000:00:10.0 00:09:08.310 Controller supports SCC. Attached to 0000:00:10.0 00:09:08.310 Namespace ID: 1 size: 6GB 00:09:08.310 Initialization complete. 00:09:08.310 00:09:08.310 Controller QEMU NVMe Ctrl (12340 ) 00:09:08.310 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:08.310 Namespace Block Size:4096 00:09:08.310 Writing LBAs 0 to 63 with Random Data 00:09:08.310 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:08.310 LBAs matching Written Data: 64 00:09:08.310 00:09:08.310 real 0m0.256s 00:09:08.310 user 0m0.094s 00:09:08.310 sys 0m0.061s 00:09:08.310 ************************************ 00:09:08.310 END TEST nvme_simple_copy 00:09:08.310 ************************************ 00:09:08.310 23:21:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.310 23:21:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:08.567 ************************************ 00:09:08.567 END TEST nvme_scc 00:09:08.567 ************************************ 00:09:08.567 00:09:08.567 real 0m7.725s 00:09:08.567 user 0m1.045s 00:09:08.567 sys 0m1.427s 00:09:08.567 23:21:54 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.567 23:21:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:08.567 23:21:54 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:08.567 23:21:54 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:08.567 23:21:54 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:08.567 23:21:54 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:08.567 23:21:54 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:08.567 23:21:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.567 23:21:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.567 23:21:54 -- common/autotest_common.sh@10 -- # set +x 00:09:08.567 ************************************ 00:09:08.567 START TEST nvme_fdp 00:09:08.567 ************************************ 00:09:08.567 23:21:54 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:08.567 * Looking for test storage... 
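The controller selection in the nvme_scc run above comes down to a single bit test: get_ctrls_with_feature fetches each controller's ONCS word (0x15d on all four QEMU controllers here) and keeps the ones with bit 8, the Simple Copy bit, set, since 0x15d & (1 << 8) = 0x100. A compact restatement that queries nvme-cli directly instead of the pre-parsed arrays (the awk extraction and standalone form are illustrative, not the harness's exact code):

    ctrl_has_scc() {
        local ctrl=$1 oncs
        # ONCS (Optional NVM Command Support) from id-ctrl; bit 8 advertises
        # the Simple Copy command exercised by the simple_copy test above
        oncs=$(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl" |
            awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
        (( oncs & 1 << 8 ))
    }

    ctrl_has_scc nvme1 && echo "nvme1 supports Simple Copy"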
00:09:08.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:08.567 23:21:54 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.567 23:21:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.567 23:21:54 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.567 23:21:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.567 23:21:54 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.567 23:21:54 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.567 23:21:54 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.567 23:21:54 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.567 23:21:54 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.567 23:21:54 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.567 23:21:54 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:08.568 23:21:54 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.568 23:21:54 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.568 --rc genhtml_branch_coverage=1 00:09:08.568 --rc genhtml_function_coverage=1 00:09:08.568 --rc genhtml_legend=1 00:09:08.568 --rc geninfo_all_blocks=1 00:09:08.568 --rc geninfo_unexecuted_blocks=1 00:09:08.568 00:09:08.568 ' 00:09:08.568 23:21:54 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.568 --rc genhtml_branch_coverage=1 00:09:08.568 --rc genhtml_function_coverage=1 00:09:08.568 --rc genhtml_legend=1 00:09:08.568 --rc geninfo_all_blocks=1 00:09:08.568 --rc geninfo_unexecuted_blocks=1 00:09:08.568 00:09:08.568 ' 00:09:08.568 23:21:54 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:08.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.568 --rc genhtml_branch_coverage=1 00:09:08.568 --rc genhtml_function_coverage=1 00:09:08.568 --rc genhtml_legend=1 00:09:08.568 --rc geninfo_all_blocks=1 00:09:08.568 --rc geninfo_unexecuted_blocks=1 00:09:08.568 00:09:08.568 ' 00:09:08.568 23:21:54 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.568 --rc genhtml_branch_coverage=1 00:09:08.568 --rc genhtml_function_coverage=1 00:09:08.568 --rc genhtml_legend=1 00:09:08.568 --rc geninfo_all_blocks=1 00:09:08.568 --rc geninfo_unexecuted_blocks=1 00:09:08.568 00:09:08.568 ' 00:09:08.568 23:21:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.568 23:21:54 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.568 23:21:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.568 23:21:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.568 23:21:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.568 23:21:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:08.568 23:21:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
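The 'lt 1.15 2' trace above is scripts/common.sh deciding whether the installed lcov predates version 2, which controls the branch/function-coverage flags exported alongside it: cmp_versions splits both version strings on '.', '-' and ':' and compares the numeric fields left to right. A standalone restatement of that comparison (the helper name and the missing-field-defaults-to-zero shortcut are simplifications of the decimal() helper in the real script):

    version_lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra v2 <<< "$2"   # "2"    -> (2)
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # compare field by field; a missing field counts as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1                       # equal is not "less than"
    }

    version_lt 1.15 2 && echo "old lcov: add branch/function coverage opts"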
00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:08.568 23:21:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:08.568 23:21:54 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.568 23:21:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:08.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:09.082 Waiting for block devices as requested 00:09:09.082 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.083 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.341 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.341 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.618 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:14.618 23:22:00 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:14.618 23:22:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:14.618 23:22:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:14.618 23:22:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.618 23:22:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
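Everything from here to the end of the scan, the long run of IFS=:, read -r reg val, and eval lines, is nvme/functions.sh's nvme_get walking the id-ctrl output of each controller one 'register : value' line at a time and caching it in a global associative array; scan_nvme_ctrls then records every controller in the ctrls, nvmes, and bdfs maps declared above so later feature checks never re-invoke nvme-cli. A condensed sketch of that loop (the real function also accepts the id-ctrl/id-ns subcommand and device path as separate arguments and trims whitespace slightly differently):

    nvme_get() {
        local ref=$1 reg val
        local -gA "$ref=()"                   # e.g. the local -gA 'nvme0=()' above
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # banner/blank lines carry no value
            reg=${reg//[[:space:]]/}          # 'vid       ' -> 'vid'
            val=${val# }                      # drop the space after the colon
            eval "$ref[\$reg]=\"\$val\""      # nvme0[vid]=0x1b36, nvme0[ver]=0x10400, ...
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
    }

    nvme_get nvme0
    echo "${nvme0[mn]} ver=${nvme0[ver]}"     # QEMU NVMe Ctrl  ver=0x10400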
00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:14.618 23:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:14.618 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:14.619 23:22:00 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:14.619 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:14.619 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:14.620 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:14.620 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:14.620 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.621 
23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:14.621 23:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.621 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:14.622 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:14.622 23:22:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:14.623 23:22:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:14.623 23:22:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:14.623 23:22:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.623 23:22:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 
23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.623 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 
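Two of the registers just captured decode directly: wctemp/cctemp are kelvin thresholds, and oacs is a bitmask of optional admin commands. A quick hedged decode (bit positions follow the NVMe base spec's OACS layout; verify against the spec revision in use):

    # wctemp=343 and cctemp=373 are kelvin: roughly 70 C warning and
    # 100 C critical composite-temperature thresholds.
    wctemp=343 cctemp=373
    echo "warning: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"

    # oacs=0x12a: per the NVMe base spec, bit 1 is Format NVM, bit 3
    # Namespace Management, bit 5 Directives (the capability the FDP
    # tests in this run care about).
    oacs=0x12a
    (( oacs & (1 << 5) )) && echo "Directives supported"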
23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:14.624 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:14.625 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 
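The queue-entry-size registers captured here pack two powers of two per byte (low nibble: required size, high nibble: maximum, both as log2 bytes), and oncs is the optional-NVM-command bitmask. A small decode under the same caveat about spec bit positions:

    # sqes=0x66 -> 2^6 = 64-byte submission queue entries;
    # cqes=0x44 -> 2^4 = 16-byte completion queue entries.
    sqes=0x66 cqes=0x44
    echo "SQE: $((1 << (sqes & 0xf))) bytes, CQE: $((1 << (cqes & 0xf))) bytes"

    # oncs=0x15d: bit 2 is Dataset Management per the NVMe base spec.
    oncs=0x15d
    (( oncs & (1 << 2) )) && echo "DSM supported"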
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:14.625 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:14.626 23:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.626 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:14.627 23:22:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:14.627 23:22:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:14.627 23:22:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.627 23:22:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:14.627 
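The id-ns dump that just completed for nvme1n1 carries everything needed to size the namespace: flbas=0x7 selects LBA format 7, whose descriptor above reads "ms:64 lbads:12 rp:0 (in use)", and nsze counts logical blocks. A sketch of the arithmetic, using the captured values:

    # flbas bits 3:0 index the lbaf table captured above; format 7 is
    # 2^12 = 4096-byte data blocks with 64 bytes of metadata each.
    flbas=0x7 lbads=12 nsze=0x17a17a
    echo "format $((flbas & 0xf)): $((1 << lbads))-byte blocks"
    echo "namespace size: $((nsze * (1 << lbads))) bytes"   # ~5.9 GiB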
23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.627 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:14.628 23:22:00 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.628 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:14.629 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.630 23:22:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:14.630 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.630 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.631 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.632 23:22:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.632 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:14.633 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.633 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
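The trace above is the nvme_get helper at work: each line of `nvme id-ns` output is split on the first ':' by `IFS=:` plus `read -r reg val`, blank values are skipped (the `[[ -n '' ]]` miss on the header line), and each pair is stored through eval into a global associative array named after the namespace, so nvme2n3[nsze]=0x100000, nvme2n3[flbas]=0x4, and so on. A minimal sketch of that pattern, assuming nvme-cli is installed and reusing /dev/nvme2n3 from the trace; this is an illustration, not the actual nvme/functions.sh implementation:

#!/usr/bin/env bash
# Sketch of the nvme_get pattern traced above: parse `nvme id-ns` output
# into a global associative array keyed by register name.
nvme_get_sketch() {
  local ref=$1 dev=$2 reg val
  declare -gA "$ref=()"              # e.g. creates the global array nvme2n3
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}         # normalize "lbaf  0 " -> "lbaf0"
    val=${val# }                     # drop the space nvme-cli prints after ':'
    [[ -n $val ]] || continue        # skip blank values, as the trace does
    eval "${ref}[\$reg]=\$val"       # nvme2n3[nsze]=0x100000, nvme2n3[flbas]=0x4, ...
  done < <(nvme id-ns "$dev")
}

nvme_get_sketch nvme2n3 /dev/nvme2n3
echo "in-use LBA format field: ${nvme2n3[flbas]}"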
00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.634 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:14.635 
23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:14.635 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:14.635 23:22:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:14.635 23:22:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:14.635 23:22:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:14.635 23:22:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:14.635 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.635 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 
23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.636 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 
23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:14.637 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
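Just above, the id-ctrl parse records nvme3[sqes]=0x66 and nvme3[cqes]=0x44. Per the NVMe base spec, the low nibble of each byte is log2 of the minimum queue-entry size and the high nibble log2 of the maximum, so this controller's SQ entries are fixed at 64 bytes and its CQ entries at 16. A small standalone decoder, with the two values hard-coded from the trace:

#!/usr/bin/env bash
# Decode the SQES/CQES bytes captured above (0x66 and 0x44).
# Bits 3:0 = log2(minimum entry size), bits 7:4 = log2(maximum entry size).
decode_es() {
  local name=$1 val=$2
  printf '%s: min %d bytes, max %d bytes\n' "$name" \
    $(( 1 << (val & 0xf) )) $(( 1 << (val >> 4 & 0xf) ))
}
decode_es SQES 0x66   # -> SQES: min 64 bytes, max 64 bytes
decode_es CQES 0x44   # -> CQES: min 16 bytes, max 16 bytes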
00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:14.638 23:22:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:14.638 23:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
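The selection loop running here (`get_ctrls_with_feature fdp`) decides FDP support by pulling each controller's CTRATT word out of its associative array through a bash nameref and testing bit 19, the Flexible Data Placement bit: the controllers reporting 0x8000 fail the test, while nvme3's 0x88010 has 0x80000 set and is the one echoed back. A standalone sketch of that check, with the CTRATT values copied from the trace:

#!/usr/bin/env bash
# Sketch of ctrl_has_fdp as traced here: CTRATT bit 19 is the FDP bit.
# The values and array names mirror the log.
declare -A nvme1=([ctratt]=0x8000)    # no FDP
declare -A nvme3=([ctratt]=0x88010)   # FDP-capable

ctrl_has_fdp() {
  local -n _ctrl=$1                   # nameref, as in `local -n _ctrl=nvme1` above
  local ctratt=${_ctrl[ctratt]}
  (( ctratt & 1 << 19 ))              # 1 << 19 == 0x80000
}

for ctrl in nvme1 nvme3; do
  ctrl_has_fdp "$ctrl" && echo "$ctrl has FDP"   # only nvme3 prints
done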
00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:14.639 23:22:00 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:14.639 23:22:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:14.639 23:22:00 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:14.639 23:22:00 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:15.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.768 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.768 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.768 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.768 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.768 23:22:01 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:15.768 23:22:01 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:15.768 23:22:01 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.768 23:22:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:15.768 ************************************ 00:09:15.768 START TEST nvme_flexible_data_placement 00:09:15.768 ************************************ 00:09:15.768 23:22:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:16.026 Initializing NVMe Controllers 00:09:16.026 Attaching to 0000:00:13.0 00:09:16.027 Controller supports FDP Attached to 0000:00:13.0 00:09:16.027 Namespace ID: 1 Endurance Group ID: 1 00:09:16.027 Initialization complete. 00:09:16.027 00:09:16.027 ================================== 00:09:16.027 == FDP tests for Namespace: #01 == 00:09:16.027 ================================== 00:09:16.027 00:09:16.027 Get Feature: FDP: 00:09:16.027 ================= 00:09:16.027 Enabled: Yes 00:09:16.027 FDP configuration Index: 0 00:09:16.027 00:09:16.027 FDP configurations log page 00:09:16.027 =========================== 00:09:16.027 Number of FDP configurations: 1 00:09:16.027 Version: 0 00:09:16.027 Size: 112 00:09:16.027 FDP Configuration Descriptor: 0 00:09:16.027 Descriptor Size: 96 00:09:16.027 Reclaim Group Identifier format: 2 00:09:16.027 FDP Volatile Write Cache: Not Present 00:09:16.027 FDP Configuration: Valid 00:09:16.027 Vendor Specific Size: 0 00:09:16.027 Number of Reclaim Groups: 2 00:09:16.027 Number of Reclaim Unit Handles: 8 00:09:16.027 Max Placement Identifiers: 128 00:09:16.027 Number of Namespaces Supported: 256 00:09:16.027 Reclaim Unit Nominal Size: 6000000 bytes 00:09:16.027 Estimated Reclaim Unit Time Limit: Not Reported 00:09:16.027 RUH Desc #000: RUH Type: Initially Isolated 00:09:16.027 RUH Desc #001: RUH Type: Initially Isolated 00:09:16.027 RUH Desc #002: RUH Type: Initially Isolated 00:09:16.027 RUH Desc #003: RUH Type: Initially Isolated 00:09:16.027 RUH Desc #004: RUH Type: Initially Isolated 00:09:16.027 RUH Desc #005: RUH Type: Initially Isolated 00:09:16.027 RUH Desc #006: RUH Type: Initially Isolated 00:09:16.027 RUH Desc #007: RUH Type: Initially Isolated 00:09:16.027 00:09:16.027 FDP reclaim unit handle usage log page 00:09:16.027 ====================================== 00:09:16.027 Number of Reclaim Unit Handles: 8 00:09:16.027 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:16.027 RUH Usage Desc #001: RUH Attributes: Unused 00:09:16.027 RUH Usage Desc #002: RUH Attributes: Unused 00:09:16.027 RUH Usage Desc #003: RUH Attributes: Unused 00:09:16.027 RUH Usage Desc #004: RUH Attributes: Unused 00:09:16.027 RUH Usage Desc #005: RUH Attributes: Unused 00:09:16.027 RUH Usage Desc #006: RUH Attributes: Unused 00:09:16.027 RUH Usage Desc #007: RUH Attributes: Unused 00:09:16.027 00:09:16.027 FDP statistics log page 00:09:16.027 ======================= 00:09:16.027 Host bytes with metadata written: 2159243264 00:09:16.027 Media bytes with metadata written: 2159587328 00:09:16.027 Media bytes erased: 0 00:09:16.027 00:09:16.027 FDP Reclaim unit handle status 00:09:16.027 ============================== 00:09:16.027 Number of RUHS descriptors: 2 00:09:16.027 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000034c9 00:09:16.027 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:16.027 00:09:16.027 FDP write on placement id: 0 success 00:09:16.027 00:09:16.027 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:16.027 00:09:16.027 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:16.027 00:09:16.027 Get Feature: FDP Events for Placement handle: #0 00:09:16.027 ======================== 00:09:16.027 Number of FDP Events: 6 00:09:16.027 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:16.027 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:16.027 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:16.027 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:16.027 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:16.027 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:16.027 00:09:16.027 FDP events log page 00:09:16.027 =================== 00:09:16.027 Number of FDP events: 1 00:09:16.027 FDP Event #0: 00:09:16.027 Event Type: RU Not Written to Capacity 00:09:16.027 Placement Identifier: Valid 00:09:16.027 NSID: Valid 00:09:16.027 Location: Valid 00:09:16.027 Placement Identifier: 0 00:09:16.027 Event Timestamp: 2 00:09:16.027 Namespace Identifier: 1 00:09:16.027 Reclaim Group Identifier: 0 00:09:16.027 Reclaim Unit Handle Identifier: 0 00:09:16.027 00:09:16.027 FDP test passed 00:09:16.027 00:09:16.027 real 0m0.226s 00:09:16.027 user 0m0.071s 00:09:16.027 sys 0m0.053s 00:09:16.027 23:22:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.027 ************************************ 00:09:16.027 END TEST nvme_flexible_data_placement 00:09:16.027 ************************************ 00:09:16.027 23:22:02 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:16.027 ************************************ 00:09:16.027 END TEST nvme_fdp 00:09:16.027 ************************************ 00:09:16.027 00:09:16.027 real 0m7.584s 00:09:16.027 user 0m1.002s 00:09:16.027 sys 0m1.404s 00:09:16.027 23:22:02 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.027 23:22:02 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:16.027 23:22:02 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:16.027 23:22:02 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:16.027 23:22:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.027 23:22:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.027 23:22:02 -- common/autotest_common.sh@10 -- # set +x 00:09:16.027 ************************************ 00:09:16.027 START TEST nvme_rpc 00:09:16.027 ************************************ 00:09:16.027 23:22:02 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:16.286 * Looking for test storage... 
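For reference, the FDP state the nvme_flexible_data_placement run printed above can be re-queried outside the harness. A minimal sketch: the first command is the exact invocation from the trace; the nvme-cli lines are hypothetical equivalents that assume a build with the fdp plugin (subcommand and flag spellings vary across nvme-cli versions) and that /dev/nvme0 is the 0000:00:13.0 controller rebound to the kernel driver.

  # SPDK's FDP exerciser, as run by the test above:
  /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'

  # Hypothetical nvme-cli equivalents of the log pages shown above:
  nvme fdp configs /dev/nvme0   # FDP configurations log page
  nvme fdp usage /dev/nvme0     # reclaim unit handle usage log page
  nvme fdp stats /dev/nvme0     # FDP statistics (host/media bytes written)
  nvme fdp events /dev/nvme0    # FDP events log page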
00:09:16.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.286 23:22:02 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.286 --rc genhtml_branch_coverage=1 00:09:16.286 --rc genhtml_function_coverage=1 00:09:16.286 --rc genhtml_legend=1 00:09:16.286 --rc geninfo_all_blocks=1 00:09:16.286 --rc geninfo_unexecuted_blocks=1 00:09:16.286 00:09:16.286 ' 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.286 --rc genhtml_branch_coverage=1 00:09:16.286 --rc genhtml_function_coverage=1 00:09:16.286 --rc genhtml_legend=1 00:09:16.286 --rc geninfo_all_blocks=1 00:09:16.286 --rc geninfo_unexecuted_blocks=1 00:09:16.286 00:09:16.286 ' 00:09:16.286 23:22:02 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:16.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.286 --rc genhtml_branch_coverage=1 00:09:16.286 --rc genhtml_function_coverage=1 00:09:16.286 --rc genhtml_legend=1 00:09:16.286 --rc geninfo_all_blocks=1 00:09:16.286 --rc geninfo_unexecuted_blocks=1 00:09:16.286 00:09:16.287 ' 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.287 --rc genhtml_branch_coverage=1 00:09:16.287 --rc genhtml_function_coverage=1 00:09:16.287 --rc genhtml_legend=1 00:09:16.287 --rc geninfo_all_blocks=1 00:09:16.287 --rc geninfo_unexecuted_blocks=1 00:09:16.287 00:09:16.287 ' 00:09:16.287 23:22:02 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:16.287 23:22:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:16.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.287 23:22:02 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:16.287 23:22:02 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77516 00:09:16.287 23:22:02 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:16.287 23:22:02 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:16.287 23:22:02 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77516 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77516 ']' 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.287 23:22:02 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.545 [2024-11-19 23:22:02.484287] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:09:16.545 [2024-11-19 23:22:02.484402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77516 ] 00:09:16.545 [2024-11-19 23:22:02.641766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:16.545 [2024-11-19 23:22:02.662176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.545 [2024-11-19 23:22:02.662188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.477 23:22:03 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.477 23:22:03 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:17.477 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:17.477 Nvme0n1 00:09:17.477 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:17.477 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:17.746 request: 00:09:17.746 { 00:09:17.746 "bdev_name": "Nvme0n1", 00:09:17.746 "filename": "non_existing_file", 00:09:17.746 "method": "bdev_nvme_apply_firmware", 00:09:17.746 "req_id": 1 00:09:17.746 } 00:09:17.746 Got JSON-RPC error response 00:09:17.746 response: 00:09:17.746 { 00:09:17.746 "code": -32603, 00:09:17.746 "message": "open file failed." 00:09:17.746 } 00:09:17.746 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:17.746 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:17.746 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:18.016 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:18.016 23:22:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77516 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77516 ']' 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77516 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77516 00:09:18.016 killing process with pid 77516 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77516' 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77516 00:09:18.016 23:22:03 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77516 00:09:18.275 ************************************ 00:09:18.275 END TEST nvme_rpc 00:09:18.275 ************************************ 00:09:18.275 00:09:18.275 real 0m2.037s 00:09:18.275 user 0m3.964s 00:09:18.275 sys 0m0.446s 00:09:18.275 23:22:04 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.275 23:22:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.275 23:22:04 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:18.275 23:22:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:18.275 23:22:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.275 23:22:04 -- common/autotest_common.sh@10 -- # set +x 00:09:18.275 ************************************ 00:09:18.275 START TEST nvme_rpc_timeouts 00:09:18.275 ************************************ 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:18.275 * Looking for test storage... 00:09:18.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.275 23:22:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:18.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.275 --rc genhtml_branch_coverage=1 00:09:18.275 --rc genhtml_function_coverage=1 00:09:18.275 --rc genhtml_legend=1 00:09:18.275 --rc geninfo_all_blocks=1 00:09:18.275 --rc geninfo_unexecuted_blocks=1 00:09:18.275 00:09:18.275 ' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:18.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.275 --rc genhtml_branch_coverage=1 00:09:18.275 --rc genhtml_function_coverage=1 00:09:18.275 --rc genhtml_legend=1 00:09:18.275 --rc geninfo_all_blocks=1 00:09:18.275 --rc geninfo_unexecuted_blocks=1 00:09:18.275 00:09:18.275 ' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:18.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.275 --rc genhtml_branch_coverage=1 00:09:18.275 --rc genhtml_function_coverage=1 00:09:18.275 --rc genhtml_legend=1 00:09:18.275 --rc geninfo_all_blocks=1 00:09:18.275 --rc geninfo_unexecuted_blocks=1 00:09:18.275 00:09:18.275 ' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:18.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.275 --rc genhtml_branch_coverage=1 00:09:18.275 --rc genhtml_function_coverage=1 00:09:18.275 --rc genhtml_legend=1 00:09:18.275 --rc geninfo_all_blocks=1 00:09:18.275 --rc geninfo_unexecuted_blocks=1 00:09:18.275 00:09:18.275 ' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.275 23:22:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77569 00:09:18.275 23:22:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77569 00:09:18.275 23:22:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77606 00:09:18.275 23:22:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
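An aside on the nvme_rpc section above: the negative test it runs is easy to reproduce by hand with the same RPCs the trace shows. A condensed sketch, using the repo paths from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Attach the first controller as bdev Nvme0, which exposes Nvme0n1:
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

  # A missing firmware image must fail with JSON-RPC error -32603
  # ("open file failed."), which is exactly what the test asserts:
  $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo 'failed as expected'

  # Clean up:
  $rpc bdev_nvme_detach_controller Nvme0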
00:09:18.275 23:22:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77606 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77606 ']' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.275 23:22:04 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:18.275 23:22:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:18.534 [2024-11-19 23:22:04.515183] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:09:18.534 [2024-11-19 23:22:04.515438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77606 ] 00:09:18.534 [2024-11-19 23:22:04.672235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:18.534 [2024-11-19 23:22:04.692263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.534 [2024-11-19 23:22:04.692392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.467 Checking default timeout settings: 00:09:19.467 23:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.467 23:22:05 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:19.467 23:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:19.467 23:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:19.724 Making settings changes with rpc: 00:09:19.724 23:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:19.724 23:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:19.724 Check default vs. modified settings: 00:09:19.724 23:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:19.724 23:22:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77569 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77569 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.290 Setting action_on_timeout is changed as expected. 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77569 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77569 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.290 Setting timeout_us is changed as expected. 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77569 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77569 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.290 Setting timeout_admin_us is changed as expected. 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77569 /tmp/settings_modified_77569 00:09:20.290 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77606 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77606 ']' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77606 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77606 00:09:20.290 killing process with pid 77606 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77606' 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77606 00:09:20.290 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77606 00:09:20.549 RPC TIMEOUT SETTING TEST PASSED. 00:09:20.549 23:22:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
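The default-vs-modified check traced above boils down to diffing two save_config dumps field by field. A standalone sketch of the same loop, reusing the trace's own grep/awk/sed pipeline (the /tmp paths are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified

  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
  done

The sed strip normalizes away JSON punctuation so the comparison sees only the bare value (none, abort, 0, 12000000, ...), which is why the trace runs it after every grep/awk pair.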
00:09:20.549 00:09:20.549 real 0m2.231s 00:09:20.549 user 0m4.462s 00:09:20.549 sys 0m0.488s 00:09:20.549 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.549 ************************************ 00:09:20.549 END TEST nvme_rpc_timeouts 00:09:20.549 ************************************ 00:09:20.549 23:22:06 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:20.549 23:22:06 -- spdk/autotest.sh@239 -- # uname -s 00:09:20.549 23:22:06 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:20.549 23:22:06 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:20.549 23:22:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.549 23:22:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.549 23:22:06 -- common/autotest_common.sh@10 -- # set +x 00:09:20.549 ************************************ 00:09:20.549 START TEST sw_hotplug 00:09:20.549 ************************************ 00:09:20.549 23:22:06 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:20.549 * Looking for test storage... 00:09:20.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.549 23:22:06 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.549 23:22:06 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.549 23:22:06 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.549 23:22:06 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.549 23:22:06 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.550 23:22:06 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.550 23:22:06 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:20.550 23:22:06 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.550 23:22:06 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.550 --rc genhtml_branch_coverage=1 00:09:20.550 --rc genhtml_function_coverage=1 00:09:20.550 --rc genhtml_legend=1 00:09:20.550 --rc geninfo_all_blocks=1 00:09:20.550 --rc geninfo_unexecuted_blocks=1 00:09:20.550 00:09:20.550 ' 00:09:20.550 23:22:06 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.550 --rc genhtml_branch_coverage=1 00:09:20.550 --rc genhtml_function_coverage=1 00:09:20.550 --rc genhtml_legend=1 00:09:20.550 --rc geninfo_all_blocks=1 00:09:20.550 --rc geninfo_unexecuted_blocks=1 00:09:20.550 00:09:20.550 ' 00:09:20.550 23:22:06 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.550 --rc genhtml_branch_coverage=1 00:09:20.550 --rc genhtml_function_coverage=1 00:09:20.550 --rc genhtml_legend=1 00:09:20.550 --rc geninfo_all_blocks=1 00:09:20.550 --rc geninfo_unexecuted_blocks=1 00:09:20.550 00:09:20.550 ' 00:09:20.550 23:22:06 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.550 --rc genhtml_branch_coverage=1 00:09:20.550 --rc genhtml_function_coverage=1 00:09:20.550 --rc genhtml_legend=1 00:09:20.550 --rc geninfo_all_blocks=1 00:09:20.550 --rc geninfo_unexecuted_blocks=1 00:09:20.550 00:09:20.550 ' 00:09:20.550 23:22:06 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:21.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.116 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.116 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.116 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.116 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:21.116 23:22:07 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:21.116 23:22:07 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:21.117 23:22:07 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:21.117 23:22:07 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.117 23:22:07 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:21.117 23:22:07 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.117 23:22:07 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:21.117 23:22:07 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:21.117 23:22:07 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.375 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.633 Waiting for block devices as requested 00:09:21.633 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.633 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.891 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:21.891 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.244 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:27.244 23:22:13 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:27.244 23:22:13 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:27.244 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:27.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.509 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:27.768 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:28.025 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.025 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:28.025 23:22:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78452 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:28.025 23:22:14 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:28.025 23:22:14 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:28.025 23:22:14 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:28.025 23:22:14 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:28.025 23:22:14 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:28.025 23:22:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:28.282 Initializing NVMe Controllers 00:09:28.282 Attaching to 0000:00:10.0 00:09:28.282 Attaching to 0000:00:11.0 00:09:28.282 Attached to 0000:00:11.0 00:09:28.282 Attached to 0000:00:10.0 00:09:28.282 Initialization complete. Starting I/O... 
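Stepping back to the enumeration traced earlier: nvme_in_userspace reduces to filtering lspci output on PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The pipeline below is lifted from the trace and assumes pciutils is installed; on this VM it prints the four bdfs 0000:00:10.0 through 0000:00:13.0, of which nvme_count=2 keeps only the first two for the hotplug loop.

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'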
00:09:28.282 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:28.282 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:28.282 00:09:29.212 QEMU NVMe Ctrl (12341 ): 3144 I/Os completed (+3144) 00:09:29.212 QEMU NVMe Ctrl (12340 ): 3209 I/Os completed (+3209) 00:09:29.212 00:09:30.142 QEMU NVMe Ctrl (12341 ): 6724 I/Os completed (+3580) 00:09:30.142 QEMU NVMe Ctrl (12340 ): 6897 I/Os completed (+3688) 00:09:30.142 00:09:31.520 QEMU NVMe Ctrl (12341 ): 10401 I/Os completed (+3677) 00:09:31.520 QEMU NVMe Ctrl (12340 ): 10602 I/Os completed (+3705) 00:09:31.520 00:09:32.454 QEMU NVMe Ctrl (12341 ): 14072 I/Os completed (+3671) 00:09:32.454 QEMU NVMe Ctrl (12340 ): 14279 I/Os completed (+3677) 00:09:32.454 00:09:33.384 QEMU NVMe Ctrl (12341 ): 17862 I/Os completed (+3790) 00:09:33.384 QEMU NVMe Ctrl (12340 ): 18219 I/Os completed (+3940) 00:09:33.384 00:09:33.949 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:33.949 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:33.949 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:33.949 [2024-11-19 23:22:20.128113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:33.949 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:33.949 [2024-11-19 23:22:20.129222] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.949 [2024-11-19 23:22:20.129258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.949 [2024-11-19 23:22:20.129271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.949 [2024-11-19 23:22:20.129285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.949 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:33.949 [2024-11-19 23:22:20.130398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.949 [2024-11-19 23:22:20.130441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.949 [2024-11-19 23:22:20.130454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:33.949 [2024-11-19 23:22:20.130467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:34.207 [2024-11-19 23:22:20.151459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:34.207 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:34.207 [2024-11-19 23:22:20.152439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 [2024-11-19 23:22:20.152483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 [2024-11-19 23:22:20.152500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 [2024-11-19 23:22:20.152513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:34.207 [2024-11-19 23:22:20.153572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 [2024-11-19 23:22:20.153596] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 [2024-11-19 23:22:20.153612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 [2024-11-19 23:22:20.153624] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:34.207 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:34.207 EAL: Scan for (pci) bus failed. 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:34.207 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:34.207 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:34.207 Attaching to 0000:00:10.0 00:09:34.207 Attached to 0000:00:10.0 00:09:34.465 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:34.465 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:34.465 23:22:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:34.465 Attaching to 0000:00:11.0 00:09:34.465 Attached to 0000:00:11.0 00:09:35.397 QEMU NVMe Ctrl (12340 ): 4049 I/Os completed (+4049) 00:09:35.397 QEMU NVMe Ctrl (12341 ): 3772 I/Os completed (+3772) 00:09:35.397 00:09:36.329 QEMU NVMe Ctrl (12340 ): 8196 I/Os completed (+4147) 00:09:36.329 QEMU NVMe Ctrl (12341 ): 7952 I/Os completed (+4180) 00:09:36.329 00:09:37.294 QEMU NVMe Ctrl (12340 ): 12173 I/Os completed (+3977) 00:09:37.294 QEMU NVMe Ctrl (12341 ): 11982 I/Os completed (+4030) 00:09:37.294 00:09:38.227 QEMU NVMe Ctrl (12340 ): 16353 I/Os completed (+4180) 00:09:38.227 QEMU NVMe Ctrl (12341 ): 16155 I/Os completed (+4173) 00:09:38.227 00:09:39.160 QEMU NVMe Ctrl (12340 ): 20576 I/Os completed (+4223) 00:09:39.160 QEMU NVMe Ctrl (12341 ): 20379 I/Os completed (+4224) 00:09:39.160 00:09:40.535 QEMU NVMe Ctrl (12340 ): 24870 I/Os completed (+4294) 00:09:40.535 QEMU NVMe Ctrl (12341 ): 24603 I/Os completed (+4224) 00:09:40.535 00:09:41.472 QEMU NVMe Ctrl (12340 ): 28530 I/Os completed (+3660) 00:09:41.472 QEMU NVMe Ctrl (12341 ): 28272 I/Os completed (+3669) 
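The bare "echo 1" / "echo uio_pci_generic" lines inside each remove/attach cycle are sysfs writes. Roughly what sw_hotplug.sh does per hotplug event; the node names below are the standard PCI sysfs interface, but the script's exact write sequence may differ slightly:

  bdfs='0000:00:10.0 0000:00:11.0'

  # Surprise-remove each allowed controller, then rescan the bus:
  for bdf in $bdfs; do echo 1 > "/sys/bus/pci/devices/$bdf/remove"; done
  echo 1 > /sys/bus/pci/rescan

  # Steer the rediscovered devices back to the userspace driver:
  for bdf in $bdfs; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
      echo "$bdf" > /sys/bus/pci/drivers_probe
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
  done

The hotplug example polls for the resulting uevents, which is why each removal shows up here as a pair of nvme_ctrlr_fail/abort-tracker messages followed by a fresh attach.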
00:09:41.472 00:09:42.406 QEMU NVMe Ctrl (12340 ): 32569 I/Os completed (+4039) 00:09:42.406 QEMU NVMe Ctrl (12341 ): 32284 I/Os completed (+4012) 00:09:42.406 00:09:43.339 QEMU NVMe Ctrl (12340 ): 36561 I/Os completed (+3992) 00:09:43.339 QEMU NVMe Ctrl (12341 ): 36303 I/Os completed (+4019) 00:09:43.339 00:09:44.276 QEMU NVMe Ctrl (12340 ): 40776 I/Os completed (+4215) 00:09:44.276 QEMU NVMe Ctrl (12341 ): 40638 I/Os completed (+4335) 00:09:44.276 00:09:45.218 QEMU NVMe Ctrl (12340 ): 45040 I/Os completed (+4264) 00:09:45.218 QEMU NVMe Ctrl (12341 ): 44889 I/Os completed (+4251) 00:09:45.218 00:09:46.160 QEMU NVMe Ctrl (12340 ): 49292 I/Os completed (+4252) 00:09:46.160 QEMU NVMe Ctrl (12341 ): 49144 I/Os completed (+4255) 00:09:46.160 00:09:46.422 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:46.422 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:46.422 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.422 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.422 [2024-11-19 23:22:32.419986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:46.422 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:46.422 [2024-11-19 23:22:32.420795] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.420830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.420843] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.420857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:46.422 [2024-11-19 23:22:32.421897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.421925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.421936] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.421947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:46.422 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:46.422 [2024-11-19 23:22:32.441211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:46.422 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:46.422 [2024-11-19 23:22:32.441943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.441971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.441984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.441997] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:46.422 [2024-11-19 23:22:32.442799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.442822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.422 [2024-11-19 23:22:32.442835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.423 [2024-11-19 23:22:32.442844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:46.423 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:46.423 Attaching to 0000:00:10.0 00:09:46.423 Attached to 0000:00:10.0 00:09:46.685 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:46.685 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:46.685 23:22:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:46.685 Attaching to 0000:00:11.0 00:09:46.685 Attached to 0000:00:11.0 00:09:47.258 QEMU NVMe Ctrl (12340 ): 3022 I/Os completed (+3022) 00:09:47.258 QEMU NVMe Ctrl (12341 ): 2702 I/Os completed (+2702) 00:09:47.258 00:09:48.203 QEMU NVMe Ctrl (12340 ): 7209 I/Os completed (+4187) 00:09:48.203 QEMU NVMe Ctrl (12341 ): 6891 I/Os completed (+4189) 00:09:48.203 00:09:49.142 QEMU NVMe Ctrl (12340 ): 11422 I/Os completed (+4213) 00:09:49.142 QEMU NVMe Ctrl (12341 ): 11092 I/Os completed (+4201) 00:09:49.142 00:09:50.528 QEMU NVMe Ctrl (12340 ): 16002 I/Os completed (+4580) 00:09:50.528 QEMU NVMe Ctrl (12341 ): 15669 I/Os completed (+4577) 00:09:50.528 00:09:51.473 QEMU NVMe Ctrl (12340 ): 19709 I/Os completed (+3707) 00:09:51.473 QEMU NVMe Ctrl (12341 ): 19280 I/Os completed (+3611) 00:09:51.473 00:09:52.417 QEMU NVMe Ctrl (12340 ): 22749 I/Os completed (+3040) 00:09:52.417 QEMU NVMe Ctrl (12341 ): 22338 I/Os completed (+3058) 00:09:52.417 00:09:53.359 QEMU NVMe Ctrl (12340 ): 26932 I/Os completed (+4183) 00:09:53.359 QEMU NVMe Ctrl (12341 ): 26510 I/Os completed (+4172) 00:09:53.359 00:09:54.301 QEMU NVMe Ctrl (12340 ): 31418 I/Os completed (+4486) 00:09:54.301 QEMU NVMe Ctrl (12341 ): 31001 I/Os completed (+4491) 00:09:54.301 
00:09:55.242 QEMU NVMe Ctrl (12340 ): 35600 I/Os completed (+4182) 00:09:55.242 QEMU NVMe Ctrl (12341 ): 35192 I/Os completed (+4191) 00:09:55.242 00:09:56.182 QEMU NVMe Ctrl (12340 ): 39742 I/Os completed (+4142) 00:09:56.182 QEMU NVMe Ctrl (12341 ): 39339 I/Os completed (+4147) 00:09:56.182 00:09:57.125 QEMU NVMe Ctrl (12340 ): 43205 I/Os completed (+3463) 00:09:57.125 QEMU NVMe Ctrl (12341 ): 42831 I/Os completed (+3492) 00:09:57.125 00:09:58.511 QEMU NVMe Ctrl (12340 ): 46229 I/Os completed (+3024) 00:09:58.511 QEMU NVMe Ctrl (12341 ): 45852 I/Os completed (+3021) 00:09:58.511 00:09:58.511 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:58.511 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:58.511 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:58.511 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:58.511 [2024-11-19 23:22:44.676369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:58.511 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:58.511 [2024-11-19 23:22:44.677281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.677328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.677343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.677359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:58.511 [2024-11-19 23:22:44.678611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.678650] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.678663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.678675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:58.511 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:58.511 [2024-11-19 23:22:44.693757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:58.511 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:58.511 [2024-11-19 23:22:44.694496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.694530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.694545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.694557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:58.511 [2024-11-19 23:22:44.695442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.695476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.695490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.511 [2024-11-19 23:22:44.695501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:58.773 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:58.773 EAL: Scan for (pci) bus failed. 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:58.773 Attaching to 0000:00:10.0 00:09:58.773 Attached to 0000:00:10.0 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:58.773 23:22:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:58.773 Attaching to 0000:00:11.0 00:09:58.773 Attached to 0000:00:11.0 00:09:58.773 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:58.773 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:58.773 [2024-11-19 23:22:44.941314] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:11.015 23:22:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:11.015 23:22:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.015 23:22:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:10:11.015 23:22:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:10:11.015 23:22:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:11.015 23:22:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:10:11.015 23:22:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:10:11.015 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 23:22:56 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78452 00:10:17.603 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78452) - No such process 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78452 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78997 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78997 00:10:17.603 23:23:02 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78997 ']' 00:10:17.603 23:23:02 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.603 23:23:02 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.603 23:23:02 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.603 23:23:02 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.603 23:23:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.603 23:23:02 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.603 [2024-11-19 23:23:03.041701] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
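`waitforlisten 78997` above blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A minimal sketch of the same idea, assuming SPDK's scripts/rpc.py is on PATH as rpc.py and using the real, always-available `rpc_get_methods` RPC; the retry bound mirrors the max_retries=100 seen in the trace:

#!/usr/bin/env bash
# Poll an SPDK RPC socket until the target answers (waitforlisten sketch).
set -euo pipefail

pid=$1                              # spdk_tgt pid to watch
sock=${2:-/var/tmp/spdk.sock}       # UNIX domain RPC socket
for ((i = 0; i < 100; i++)); do     # max_retries=100, as in the log
    kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
    if rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
        exit 0                      # target is up and serving RPCs
    fi
    sleep 0.1
done
echo "timed out waiting for $sock" >&2
exit 1

Checking the pid on every iteration matters: without it, a target that crashed during startup would leave the loop spinning until the timeout.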
00:10:17.603 [2024-11-19 23:23:03.041871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78997 ] 00:10:17.603 [2024-11-19 23:23:03.200773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.603 [2024-11-19 23:23:03.229329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:17.866 23:23:03 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:17.866 23:23:03 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.500 23:23:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.500 23:23:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.500 23:23:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:24.500 23:23:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:24.500 [2024-11-19 23:23:09.988201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:24.500 [2024-11-19 23:23:09.989272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:09.989307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:09.989320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 [2024-11-19 23:23:09.989333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:09.989341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:09.989348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 [2024-11-19 23:23:09.989358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:09.989365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:09.989372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 [2024-11-19 23:23:09.989379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:09.989386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:09.989392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 [2024-11-19 23:23:10.388201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:24.500 [2024-11-19 23:23:10.389268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:10.389301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:10.389311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 [2024-11-19 23:23:10.389325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:10.389332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:10.389340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 [2024-11-19 23:23:10.389347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:10.389355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:10.389361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 [2024-11-19 23:23:10.389370] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.500 [2024-11-19 23:23:10.389376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:24.500 [2024-11-19 23:23:10.389384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:24.500 23:23:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.500 23:23:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:24.500 23:23:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.500 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.501 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:24.761 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.761 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.761 23:23:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.994 23:23:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.994 23:23:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.994 23:23:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.994 [2024-11-19 23:23:22.788418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:36.994 [2024-11-19 23:23:22.789808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.994 [2024-11-19 23:23:22.789845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.994 [2024-11-19 23:23:22.789860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.994 [2024-11-19 23:23:22.789877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.994 [2024-11-19 23:23:22.789887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.994 [2024-11-19 23:23:22.789894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.994 [2024-11-19 23:23:22.789903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.994 [2024-11-19 23:23:22.789910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.994 [2024-11-19 23:23:22.789918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.994 [2024-11-19 23:23:22.789924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.994 [2024-11-19 23:23:22.789932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:36.994 [2024-11-19 23:23:22.789939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.994 23:23:22 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:36.994 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:36.994 23:23:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.995 23:23:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:36.995 23:23:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.995 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:36.995 23:23:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.254 23:23:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.254 23:23:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.254 23:23:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:37.254 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:37.254 [2024-11-19 23:23:23.388421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
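The `bdev_bdfs` helper traced above is just `bdev_get_bdevs` piped through jq, and the `(( 1 > 0 )) ... sleep 0.5` lines are the loop that re-runs it until a surprise-removed BDF drops out of the list. A standalone rendering of that loop, using rpc.py directly where the harness's `rpc_cmd` feeds the socket via /dev/fd/63; `wait_for_gone` is a hypothetical name:

#!/usr/bin/env bash
# wait_for_gone (hypothetical name): re-list the PCI addresses backing
# NVMe bdevs, with the same jq filter sw_hotplug.sh uses, until the
# given BDF disappears from the target's view.
set -euo pipefail

bdev_bdfs() {
    rpc.py bdev_get_bdevs |
        jq -r '.[].driver_specific.nvme[].pci_address' |
        sort -u
}

gone_bdf=$1                      # e.g. 0000:00:10.0
while bdev_bdfs | grep -q "$gone_bdf"; do
    printf 'Still waiting for %s to be gone\n' "$gone_bdf"
    sleep 0.5
done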
00:10:37.254 [2024-11-19 23:23:23.389520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.254 [2024-11-19 23:23:23.389555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.254 [2024-11-19 23:23:23.389566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.254 [2024-11-19 23:23:23.389578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.254 [2024-11-19 23:23:23.389585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.254 [2024-11-19 23:23:23.389594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.254 [2024-11-19 23:23:23.389600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.254 [2024-11-19 23:23:23.389608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.254 [2024-11-19 23:23:23.389615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.254 [2024-11-19 23:23:23.389623] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:37.254 [2024-11-19 23:23:23.389630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.254 [2024-11-19 23:23:23.389638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:37.821 23:23:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.821 23:23:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:37.821 23:23:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:37.821 23:23:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:37.821 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.821 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.821 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.079 23:23:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.302 23:23:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.302 23:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 23:23:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.302 23:23:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.302 23:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 23:23:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:50.302 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.302 [2024-11-19 23:23:36.288644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
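The `[[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ ... ]]` test above looks garbled but is just how the trace renders a literal match: inside `[[ ]]` an unquoted right-hand side is a glob pattern, so a quoted operand is shown with every character backslash-escaped. A condensed sketch of the same re-attach check; `expected` and `bdfs` are illustrative names:

#!/usr/bin/env bash
# Literal-match the rescanned BDF list against the list we started with.
# Inside [[ ]] an unquoted RHS is a glob, so quote it for a literal
# comparison (the trace shows the equivalent escaped form).
set -euo pipefail

expected="0000:00:10.0 0000:00:11.0"    # illustrative; the test derives this
bdfs="$(rpc.py bdev_get_bdevs |
        jq -r '.[].driver_specific.nvme[].pci_address' | sort -u |
        tr '\n' ' ')"
bdfs=${bdfs% }                           # trim trailing space left by tr

[[ $bdfs == "$expected" ]] && echo "both controllers re-attached"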
00:10:50.302 [2024-11-19 23:23:36.289680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.302 [2024-11-19 23:23:36.289715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.302 [2024-11-19 23:23:36.289741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.302 [2024-11-19 23:23:36.289754] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.302 [2024-11-19 23:23:36.289762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.302 [2024-11-19 23:23:36.289769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.302 [2024-11-19 23:23:36.289777] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.302 [2024-11-19 23:23:36.289783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.302 [2024-11-19 23:23:36.289791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.302 [2024-11-19 23:23:36.289797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.302 [2024-11-19 23:23:36.289807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.302 [2024-11-19 23:23:36.289813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.874 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.874 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.874 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.874 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.875 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.875 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.875 23:23:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.875 23:23:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.875 [2024-11-19 23:23:36.788651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:50.875 [2024-11-19 23:23:36.789649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.875 [2024-11-19 23:23:36.789682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.875 [2024-11-19 23:23:36.789693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.875 [2024-11-19 23:23:36.789705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.875 [2024-11-19 23:23:36.789712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.875 [2024-11-19 23:23:36.789722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.875 [2024-11-19 23:23:36.789737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.875 [2024-11-19 23:23:36.789746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.875 [2024-11-19 23:23:36.789753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.875 [2024-11-19 23:23:36.789763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.875 [2024-11-19 23:23:36.789770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.875 [2024-11-19 23:23:36.789777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.875 23:23:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.875 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:50.875 23:23:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.136 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:51.136 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.136 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.136 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.136 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.136 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.136 23:23:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.136 23:23:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.403 23:23:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.403 23:23:37 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:51.403 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.667 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.667 23:23:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.917 23:23:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.917 23:23:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.917 23:23:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.917 23:23:49 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.74 00:11:03.917 23:23:49 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.74 00:11:03.917 23:23:49 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.74 00:11:03.917 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.74 2 00:11:03.917 remove_attach_helper took 45.74s to complete (handling 2 nvme drive(s)) 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:03.917 23:23:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:03.918 23:23:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:03.918 23:23:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.500 23:23:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.500 23:23:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.500 23:23:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:10.500 23:23:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.500 [2024-11-19 23:23:55.759991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:10.500 [2024-11-19 23:23:55.761029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.500 [2024-11-19 23:23:55.761058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.500 [2024-11-19 23:23:55.761069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.500 [2024-11-19 23:23:55.761081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.500 [2024-11-19 23:23:55.761089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.500 [2024-11-19 23:23:55.761096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.500 [2024-11-19 23:23:55.761104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.500 [2024-11-19 23:23:55.761110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.500 [2024-11-19 23:23:55.761120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.500 [2024-11-19 23:23:55.761126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.500 [2024-11-19 23:23:55.761134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.500 [2024-11-19 23:23:55.761140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.500 23:23:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.500 23:23:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.500 23:23:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:10.500 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:10.501 [2024-11-19 23:23:56.359992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:10.501 [2024-11-19 23:23:56.360722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.501 [2024-11-19 23:23:56.360765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.501 [2024-11-19 23:23:56.360774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.501 [2024-11-19 23:23:56.360786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.501 [2024-11-19 23:23:56.360794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.501 [2024-11-19 23:23:56.360803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.501 [2024-11-19 23:23:56.360809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.501 [2024-11-19 23:23:56.360817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.501 [2024-11-19 23:23:56.360823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.501 [2024-11-19 23:23:56.360831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:10.501 [2024-11-19 23:23:56.360837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:10.501 [2024-11-19 23:23:56.360847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:10.761 23:23:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.761 23:23:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:10.761 23:23:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:10.761 23:23:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.022 23:23:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.253 23:24:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.253 23:24:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.253 23:24:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.253 [2024-11-19 23:24:09.160233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
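Each removal drains the admin queue, so the four commands aborted per controller are the parked ASYNC EVENT REQUESTs (opcode 0x0c, qid 0), and each completes as `ABORTED - BY REQUEST (00/07)`: status code type 0x0 (Generic Command Status) with status code 0x07 (Command Abort Requested) in NVMe terms. A throwaway decoder for that `(SCT/SC)` pair, with the lookup table deliberately trimmed to the codes that appear in this log:

#!/usr/bin/env bash
# Decode the "(SCT/SC)" pair SPDK prints, e.g. "(00/07)".
# Table trimmed to what this log actually shows.
set -euo pipefail

pair=${1:-00/07}
sct=${pair%/*}; sc=${pair#*/}

case $sct in
    00) sct_name="Generic Command Status" ;;
    *)  sct_name="other (see NVMe spec)" ;;
esac
case $sct/$sc in
    00/07) sc_name="Command Abort Requested" ;;
    *)     sc_name="other (see NVMe spec)" ;;
esac
echo "SCT 0x$sct = $sct_name; SC 0x$sc = $sc_name"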
00:11:23.253 [2024-11-19 23:24:09.161090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.253 [2024-11-19 23:24:09.161122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.253 [2024-11-19 23:24:09.161134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.253 [2024-11-19 23:24:09.161146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.253 [2024-11-19 23:24:09.161155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.253 [2024-11-19 23:24:09.161162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.253 [2024-11-19 23:24:09.161170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.253 [2024-11-19 23:24:09.161177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.253 [2024-11-19 23:24:09.161185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.253 [2024-11-19 23:24:09.161191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.253 [2024-11-19 23:24:09.161199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.253 [2024-11-19 23:24:09.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.253 23:24:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.253 23:24:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.253 23:24:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:23.253 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:11:23.825 23:24:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.825 23:24:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.825 23:24:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.825 [2024-11-19 23:24:09.760238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:23.825 [2024-11-19 23:24:09.760965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.825 [2024-11-19 23:24:09.760996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.825 [2024-11-19 23:24:09.761006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.825 [2024-11-19 23:24:09.761018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.825 [2024-11-19 23:24:09.761025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.825 [2024-11-19 23:24:09.761033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.825 [2024-11-19 23:24:09.761040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.825 [2024-11-19 23:24:09.761047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.825 [2024-11-19 23:24:09.761054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.825 [2024-11-19 23:24:09.761061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.825 [2024-11-19 23:24:09.761068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.825 [2024-11-19 23:24:09.761075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:23.825 23:24:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.086 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:24.086 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.086 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.086 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.086 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.086 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.086 23:24:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.086 23:24:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.348 23:24:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- 
# echo uio_pci_generic 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:24.348 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:24.609 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.609 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.609 23:24:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.840 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.840 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.840 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.840 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.840 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.840 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.840 23:24:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.840 23:24:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.841 23:24:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.841 23:24:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.841 23:24:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.841 23:24:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.841 23:24:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.841 [2024-11-19 23:24:22.660472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:36.841 [2024-11-19 23:24:22.661222] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.841 [2024-11-19 23:24:22.661248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.841 [2024-11-19 23:24:22.661261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.841 [2024-11-19 23:24:22.661273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.841 [2024-11-19 23:24:22.661284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.841 [2024-11-19 23:24:22.661291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.841 [2024-11-19 23:24:22.661299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.841 [2024-11-19 23:24:22.661305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.841 [2024-11-19 23:24:22.661313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.841 [2024-11-19 23:24:22.661319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.841 [2024-11-19 23:24:22.661327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.841 [2024-11-19 23:24:22.661334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.102 [2024-11-19 23:24:23.060471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
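The echo 1 / echo uio_pci_generic / echo BDF / echo '' sequence the trace keeps repeating is the entire hotplug mechanism. The trace records only the echoed values, not the redirection targets, so the sketch below fills in the standard Linux sysfs nodes for this dance under that assumption; it needs root and a BDF that actually exists:

#!/usr/bin/env bash
# Surprise-remove a PCI function, bring it back, and rebind it to
# uio_pci_generic, mirroring the echo sequence in the trace. Needs root.
set -euo pipefail

bdf=${1:-0000:00:10.0}

echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # surprise removal
echo 1 > /sys/bus/pci/rescan                           # bring the slot back
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe               # rebind via override
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override

Pinning the driver through driver_override is what keeps the kernel nvme driver from grabbing the function back before SPDK can re-attach it.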
00:11:37.102 [2024-11-19 23:24:23.061185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.102 [2024-11-19 23:24:23.061214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.102 [2024-11-19 23:24:23.061224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.102 [2024-11-19 23:24:23.061234] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.102 [2024-11-19 23:24:23.061241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.102 [2024-11-19 23:24:23.061250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.102 [2024-11-19 23:24:23.061257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.102 [2024-11-19 23:24:23.061266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.102 [2024-11-19 23:24:23.061273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.102 [2024-11-19 23:24:23.061281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:37.102 [2024-11-19 23:24:23.061287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.102 [2024-11-19 23:24:23.061295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:37.102 23:24:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.102 23:24:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:37.102 23:24:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.102 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:37.363 23:24:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.80 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.80 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.80 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.80 2 00:11:49.598 remove_attach_helper took 45.80s to complete (handling 2 nvme drive(s)) 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78997 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78997 ']' 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78997 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78997 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.598 killing process with pid 78997 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78997' 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78997 00:11:49.598 23:24:35 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78997 00:11:49.598 23:24:35 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:50.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:50.431 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:50.431 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:50.431 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:50.693 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:50.693 00:11:50.693 real 2m30.107s 00:11:50.693 user 1m50.224s 00:11:50.693 sys 0m18.560s 00:11:50.693 
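Because xtrace does not show redirections, the four bare echos per device above (driver name, BDF, BDF again, empty string) appear without their targets. A plausible sketch of the conventional sysfs rebind sequence they correspond to; the exact paths are an assumption, not taken from this log:

# Hypothetical reconstruction of one rebind step; sysfs paths are assumed.
bdf=0000:00:10.0
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the next driver
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"             # detach the current one
echo "$bdf" > /sys/bus/pci/drivers_probe                            # re-probe with the override
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # clear the override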
************************************ 00:11:50.693 23:24:36 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.693 23:24:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.693 END TEST sw_hotplug 00:11:50.693 ************************************ 00:11:50.693 23:24:36 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:50.693 23:24:36 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:50.693 23:24:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:50.693 23:24:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.693 23:24:36 -- common/autotest_common.sh@10 -- # set +x 00:11:50.693 ************************************ 00:11:50.693 START TEST nvme_xnvme 00:11:50.693 ************************************ 00:11:50.693 23:24:36 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:50.693 * Looking for test storage... 00:11:50.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:50.693 23:24:36 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:50.693 23:24:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:50.693 23:24:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.955 --rc genhtml_branch_coverage=1 00:11:50.955 --rc genhtml_function_coverage=1 00:11:50.955 --rc genhtml_legend=1 00:11:50.955 --rc geninfo_all_blocks=1 00:11:50.955 --rc geninfo_unexecuted_blocks=1 00:11:50.955 00:11:50.955 ' 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.955 --rc genhtml_branch_coverage=1 00:11:50.955 --rc genhtml_function_coverage=1 00:11:50.955 --rc genhtml_legend=1 00:11:50.955 --rc geninfo_all_blocks=1 00:11:50.955 --rc geninfo_unexecuted_blocks=1 00:11:50.955 00:11:50.955 ' 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.955 --rc genhtml_branch_coverage=1 00:11:50.955 --rc genhtml_function_coverage=1 00:11:50.955 --rc genhtml_legend=1 00:11:50.955 --rc geninfo_all_blocks=1 00:11:50.955 --rc geninfo_unexecuted_blocks=1 00:11:50.955 00:11:50.955 ' 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.955 --rc genhtml_branch_coverage=1 00:11:50.955 --rc genhtml_function_coverage=1 00:11:50.955 --rc genhtml_legend=1 00:11:50.955 --rc geninfo_all_blocks=1 00:11:50.955 --rc geninfo_unexecuted_blocks=1 00:11:50.955 00:11:50.955 ' 00:11:50.955 23:24:36 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.955 23:24:36 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.955 23:24:36 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.955 23:24:36 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.955 23:24:36 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.955 23:24:36 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:50.955 23:24:36 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.955 23:24:36 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.955 23:24:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:50.955 ************************************ 00:11:50.955 START TEST xnvme_to_malloc_dd_copy 00:11:50.955 ************************************ 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:50.955 23:24:36 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:50.955 23:24:36 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:50.955 { 00:11:50.955 "subsystems": [ 00:11:50.955 { 00:11:50.955 "subsystem": "bdev", 00:11:50.955 "config": [ 00:11:50.955 { 00:11:50.955 "params": { 00:11:50.955 "block_size": 512, 00:11:50.955 "num_blocks": 2097152, 00:11:50.955 "name": "malloc0" 00:11:50.955 }, 00:11:50.955 "method": "bdev_malloc_create" 00:11:50.955 }, 00:11:50.955 { 00:11:50.955 "params": { 00:11:50.955 "io_mechanism": "libaio", 00:11:50.955 "filename": "/dev/nullb0", 00:11:50.955 "name": "null0" 00:11:50.955 }, 00:11:50.955 "method": "bdev_xnvme_create" 00:11:50.955 }, 00:11:50.955 { 00:11:50.955 "method": "bdev_wait_for_examine" 00:11:50.955 } 00:11:50.955 ] 00:11:50.955 } 00:11:50.955 ] 00:11:50.955 } 00:11:50.955 [2024-11-19 23:24:37.040291] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
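The gen_conf JSON printed above reaches spdk_dd on /dev/fd/62; the copy is reproducible outside the harness by putting the identical config in a regular file (null_blk must be loaded first, as the test does with modprobe null_blk gb=1). A sketch using the configuration exactly as logged:

# Same bdev config as printed above, fed from a file instead of /dev/fd/62.
cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json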
00:11:50.955 [2024-11-19 23:24:37.040438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80391 ] 00:11:51.216 [2024-11-19 23:24:37.200397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.216 [2024-11-19 23:24:37.229724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.618  [2024-11-19T23:24:39.756Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-19T23:24:40.698Z] Copying: 448/1024 [MB] (225 MBps) [2024-11-19T23:24:41.639Z] Copying: 729/1024 [MB] (281 MBps) [2024-11-19T23:24:41.899Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:11:55.707 00:11:55.707 23:24:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:55.707 23:24:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:55.707 23:24:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:55.707 23:24:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:55.971 { 00:11:55.971 "subsystems": [ 00:11:55.971 { 00:11:55.971 "subsystem": "bdev", 00:11:55.971 "config": [ 00:11:55.971 { 00:11:55.971 "params": { 00:11:55.971 "block_size": 512, 00:11:55.971 "num_blocks": 2097152, 00:11:55.971 "name": "malloc0" 00:11:55.971 }, 00:11:55.971 "method": "bdev_malloc_create" 00:11:55.971 }, 00:11:55.971 { 00:11:55.971 "params": { 00:11:55.971 "io_mechanism": "libaio", 00:11:55.971 "filename": "/dev/nullb0", 00:11:55.971 "name": "null0" 00:11:55.971 }, 00:11:55.971 "method": "bdev_xnvme_create" 00:11:55.971 }, 00:11:55.971 { 00:11:55.971 "method": "bdev_wait_for_examine" 00:11:55.971 } 00:11:55.971 ] 00:11:55.971 } 00:11:55.971 ] 00:11:55.971 } 00:11:55.971 [2024-11-19 23:24:41.939953] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:11:55.971 [2024-11-19 23:24:41.940071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80457 ] 00:11:55.971 [2024-11-19 23:24:42.094229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.971 [2024-11-19 23:24:42.118465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.356  [2024-11-19T23:24:44.490Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-19T23:24:45.433Z] Copying: 625/1024 [MB] (313 MBps) [2024-11-19T23:24:45.694Z] Copying: 938/1024 [MB] (313 MBps) [2024-11-19T23:24:45.955Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:11:59.763 00:12:00.024 23:24:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:00.024 23:24:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:00.024 23:24:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:00.024 23:24:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:00.024 23:24:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:00.024 23:24:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:00.024 { 00:12:00.024 "subsystems": [ 00:12:00.024 { 00:12:00.024 "subsystem": "bdev", 00:12:00.024 "config": [ 00:12:00.024 { 00:12:00.024 "params": { 00:12:00.024 "block_size": 512, 00:12:00.024 "num_blocks": 2097152, 00:12:00.024 "name": "malloc0" 00:12:00.024 }, 00:12:00.024 "method": "bdev_malloc_create" 00:12:00.024 }, 00:12:00.024 { 00:12:00.024 "params": { 00:12:00.024 "io_mechanism": "io_uring", 00:12:00.024 "filename": "/dev/nullb0", 00:12:00.024 "name": "null0" 00:12:00.024 }, 00:12:00.024 "method": "bdev_xnvme_create" 00:12:00.024 }, 00:12:00.024 { 00:12:00.024 "method": "bdev_wait_for_examine" 00:12:00.024 } 00:12:00.024 ] 00:12:00.024 } 00:12:00.024 ] 00:12:00.024 } 00:12:00.024 [2024-11-19 23:24:46.022913] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
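This second pair of runs repeats the malloc0/null0 copies with io_mechanism flipped from libaio to io_uring; the xtrace at xnvme.sh@38-39 shows the loop that drives both passes. Its shape, reconstructed from the trace with the copy body condensed to a comment:

# Shape of the mechanism loop as traced; values match the xtrace above.
declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
xnvme_io=(libaio io_uring)
for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    # generate the config and run the malloc0 -> null0 and null0 -> malloc0 copies
done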
00:12:00.025 [2024-11-19 23:24:46.023024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80511 ] 00:12:00.025 [2024-11-19 23:24:46.177471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.025 [2024-11-19 23:24:46.198200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.408  [2024-11-19T23:24:48.542Z] Copying: 318/1024 [MB] (318 MBps) [2024-11-19T23:24:49.484Z] Copying: 637/1024 [MB] (319 MBps) [2024-11-19T23:24:49.745Z] Copying: 956/1024 [MB] (318 MBps) [2024-11-19T23:24:50.006Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:03.814 00:12:03.814 23:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:03.814 23:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:03.814 23:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:03.814 23:24:49 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:03.814 { 00:12:03.814 "subsystems": [ 00:12:03.814 { 00:12:03.814 "subsystem": "bdev", 00:12:03.814 "config": [ 00:12:03.814 { 00:12:03.814 "params": { 00:12:03.814 "block_size": 512, 00:12:03.814 "num_blocks": 2097152, 00:12:03.814 "name": "malloc0" 00:12:03.814 }, 00:12:03.814 "method": "bdev_malloc_create" 00:12:03.814 }, 00:12:03.814 { 00:12:03.814 "params": { 00:12:03.814 "io_mechanism": "io_uring", 00:12:03.814 "filename": "/dev/nullb0", 00:12:03.814 "name": "null0" 00:12:03.814 }, 00:12:03.814 "method": "bdev_xnvme_create" 00:12:03.814 }, 00:12:03.814 { 00:12:03.814 "method": "bdev_wait_for_examine" 00:12:03.814 } 00:12:03.814 ] 00:12:03.814 } 00:12:03.814 ] 00:12:03.814 } 00:12:03.814 [2024-11-19 23:24:49.994065] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:12:03.814 [2024-11-19 23:24:49.994169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80560 ] 00:12:04.075 [2024-11-19 23:24:50.149558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.075 [2024-11-19 23:24:50.167216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.460  [2024-11-19T23:24:52.593Z] Copying: 322/1024 [MB] (322 MBps) [2024-11-19T23:24:53.536Z] Copying: 646/1024 [MB] (323 MBps) [2024-11-19T23:24:53.797Z] Copying: 970/1024 [MB] (324 MBps) [2024-11-19T23:24:54.059Z] Copying: 1024/1024 [MB] (average 323 MBps) 00:12:07.867 00:12:07.867 23:24:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:07.867 23:24:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:07.867 00:12:07.867 real 0m16.959s 00:12:07.867 user 0m13.928s 00:12:07.867 sys 0m2.543s 00:12:07.867 23:24:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.867 23:24:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:07.867 ************************************ 00:12:07.867 END TEST xnvme_to_malloc_dd_copy 00:12:07.867 ************************************ 00:12:07.867 23:24:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:07.867 23:24:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.867 23:24:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.867 23:24:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.867 ************************************ 00:12:07.867 START TEST xnvme_bdevperf 00:12:07.867 ************************************ 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:07.867 
23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:07.867 23:24:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:07.867 { 00:12:07.867 "subsystems": [ 00:12:07.867 { 00:12:07.867 "subsystem": "bdev", 00:12:07.867 "config": [ 00:12:07.867 { 00:12:07.867 "params": { 00:12:07.867 "io_mechanism": "libaio", 00:12:07.867 "filename": "/dev/nullb0", 00:12:07.867 "name": "null0" 00:12:07.867 }, 00:12:07.867 "method": "bdev_xnvme_create" 00:12:07.867 }, 00:12:07.867 { 00:12:07.867 "method": "bdev_wait_for_examine" 00:12:07.867 } 00:12:07.867 ] 00:12:07.867 } 00:12:07.867 ] 00:12:07.867 } 00:12:07.867 [2024-11-19 23:24:54.015899] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:12:07.867 [2024-11-19 23:24:54.016000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80637 ] 00:12:08.128 [2024-11-19 23:24:54.154668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.128 [2024-11-19 23:24:54.171506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.128 Running I/O for 5 seconds... 00:12:10.106 208448.00 IOPS, 814.25 MiB/s [2024-11-19T23:24:57.694Z] 208576.00 IOPS, 814.75 MiB/s [2024-11-19T23:24:58.265Z] 208640.00 IOPS, 815.00 MiB/s [2024-11-19T23:24:59.658Z] 208688.00 IOPS, 815.19 MiB/s 00:12:13.466 Latency(us) 00:12:13.466 [2024-11-19T23:24:59.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.466 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:13.466 null0 : 5.00 208670.33 815.12 0.00 0.00 304.62 103.19 1518.67 00:12:13.466 [2024-11-19T23:24:59.658Z] =================================================================================================================== 00:12:13.466 [2024-11-19T23:24:59.658Z] Total : 208670.33 815.12 0.00 0.00 304.62 103.19 1518.67 00:12:13.466 23:24:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:13.466 23:24:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:13.466 23:24:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:13.466 23:24:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:13.466 23:24:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:13.466 23:24:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:13.466 { 00:12:13.466 "subsystems": [ 00:12:13.466 { 00:12:13.466 "subsystem": "bdev", 00:12:13.466 "config": [ 00:12:13.466 { 00:12:13.466 "params": { 00:12:13.466 "io_mechanism": "io_uring", 00:12:13.466 "filename": "/dev/nullb0", 00:12:13.466 "name": "null0" 00:12:13.466 }, 00:12:13.466 "method": "bdev_xnvme_create" 00:12:13.466 }, 00:12:13.466 { 00:12:13.466 "method": 
"bdev_wait_for_examine" 00:12:13.466 } 00:12:13.466 ] 00:12:13.466 } 00:12:13.466 ] 00:12:13.466 } 00:12:13.466 [2024-11-19 23:24:59.456245] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:12:13.466 [2024-11-19 23:24:59.456379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80704 ] 00:12:13.466 [2024-11-19 23:24:59.612087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.466 [2024-11-19 23:24:59.637157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.727 Running I/O for 5 seconds... 00:12:15.611 239168.00 IOPS, 934.25 MiB/s [2024-11-19T23:25:02.745Z] 239296.00 IOPS, 934.75 MiB/s [2024-11-19T23:25:04.134Z] 239381.33 IOPS, 935.08 MiB/s [2024-11-19T23:25:05.078Z] 239408.00 IOPS, 935.19 MiB/s 00:12:18.886 Latency(us) 00:12:18.886 [2024-11-19T23:25:05.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.886 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:18.886 null0 : 5.00 239407.54 935.19 0.00 0.00 265.00 141.78 1474.56 00:12:18.886 [2024-11-19T23:25:05.078Z] =================================================================================================================== 00:12:18.886 [2024-11-19T23:25:05.078Z] Total : 239407.54 935.19 0.00 0.00 265.00 141.78 1474.56 00:12:18.886 23:25:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:18.886 23:25:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:18.886 00:12:18.886 real 0m10.924s 00:12:18.886 user 0m8.689s 00:12:18.886 sys 0m2.012s 00:12:18.886 23:25:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.886 ************************************ 00:12:18.886 END TEST xnvme_bdevperf 00:12:18.886 ************************************ 00:12:18.886 23:25:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:18.886 00:12:18.886 real 0m28.160s 00:12:18.886 user 0m22.734s 00:12:18.886 sys 0m4.688s 00:12:18.886 23:25:04 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.886 ************************************ 00:12:18.886 END TEST nvme_xnvme 00:12:18.886 ************************************ 00:12:18.886 23:25:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.886 23:25:04 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:18.886 23:25:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.886 23:25:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.886 23:25:04 -- common/autotest_common.sh@10 -- # set +x 00:12:18.886 ************************************ 00:12:18.886 START TEST blockdev_xnvme 00:12:18.886 ************************************ 00:12:18.886 23:25:04 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:18.886 * Looking for test storage... 
00:12:18.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:18.886 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.886 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.886 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.148 23:25:05 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.148 --rc genhtml_branch_coverage=1 00:12:19.148 --rc genhtml_function_coverage=1 00:12:19.148 --rc genhtml_legend=1 00:12:19.148 --rc geninfo_all_blocks=1 00:12:19.148 --rc geninfo_unexecuted_blocks=1 00:12:19.148 00:12:19.148 ' 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.148 --rc genhtml_branch_coverage=1 00:12:19.148 --rc genhtml_function_coverage=1 00:12:19.148 --rc genhtml_legend=1 
00:12:19.148 --rc geninfo_all_blocks=1 00:12:19.148 --rc geninfo_unexecuted_blocks=1 00:12:19.148 00:12:19.148 ' 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.148 --rc genhtml_branch_coverage=1 00:12:19.148 --rc genhtml_function_coverage=1 00:12:19.148 --rc genhtml_legend=1 00:12:19.148 --rc geninfo_all_blocks=1 00:12:19.148 --rc geninfo_unexecuted_blocks=1 00:12:19.148 00:12:19.148 ' 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.148 --rc genhtml_branch_coverage=1 00:12:19.148 --rc genhtml_function_coverage=1 00:12:19.148 --rc genhtml_legend=1 00:12:19.148 --rc geninfo_all_blocks=1 00:12:19.148 --rc geninfo_unexecuted_blocks=1 00:12:19.148 00:12:19.148 ' 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80841 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 80841 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 80841 ']' 00:12:19.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
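blockdev.sh starts spdk_tgt in the background (pid 80841 here) and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A condensed sketch of that start-and-wait pattern; the polling body is an assumption, since SPDK's real waitforlisten does more bookkeeping (retry caps, abort traps):

# Sketch of start-and-wait; socket path from the log, loop body assumed.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.1
done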
00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.148 23:25:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.148 23:25:05 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:19.148 [2024-11-19 23:25:05.242965] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:12:19.148 [2024-11-19 23:25:05.243130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80841 ] 00:12:19.410 [2024-11-19 23:25:05.402394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.410 [2024-11-19 23:25:05.419261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.983 23:25:06 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.983 23:25:06 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:12:19.983 23:25:06 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:19.983 23:25:06 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:19.983 23:25:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:19.983 23:25:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:19.983 23:25:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:20.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.505 Waiting for block devices as requested 00:12:20.505 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.505 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.505 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.765 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:26.059 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 
00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:26.059 23:25:11 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:26.059 nvme0n1 00:12:26.059 nvme1n1 00:12:26.059 nvme2n1 00:12:26.059 nvme2n2 00:12:26.059 nvme2n3 00:12:26.059 nvme3n1 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.059 23:25:11 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.059 23:25:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.059 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:26.060 23:25:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.060 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:26.060 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:26.060 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c9750a1e-973f-4350-9784-01a9012d2ac1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c9750a1e-973f-4350-9784-01a9012d2ac1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c6f22fcc-cb14-4425-bec3-0099b963a0b2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c6f22fcc-cb14-4425-bec3-0099b963a0b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "df31370a-f40d-4d65-8dd8-290fb10c08c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df31370a-f40d-4d65-8dd8-290fb10c08c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "85092311-44b1-4b57-be98-e9b1204d484a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85092311-44b1-4b57-be98-e9b1204d484a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6f0fa0f6-032b-4e01-9fa5-eec239c0caf9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f0fa0f6-032b-4e01-9fa5-eec239c0caf9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7cab64bd-3ba3-4328-ab90-83ed0f006812"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7cab64bd-3ba3-4328-ab90-83ed0f006812",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:26.060 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:26.060 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:26.060 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:26.060 23:25:11 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 80841 00:12:26.060 23:25:11 
blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 80841 ']' 00:12:26.060 23:25:11 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 80841 00:12:26.060 23:25:11 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:12:26.060 23:25:11 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.060 23:25:11 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80841 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.060 killing process with pid 80841 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80841' 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 80841 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 80841 00:12:26.060 23:25:12 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:26.060 23:25:12 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.060 23:25:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.060 ************************************ 00:12:26.060 START TEST bdev_hello_world 00:12:26.060 ************************************ 00:12:26.060 23:25:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:26.321 [2024-11-19 23:25:12.287093] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:12:26.321 [2024-11-19 23:25:12.287206] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81188 ] 00:12:26.321 [2024-11-19 23:25:12.441330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.321 [2024-11-19 23:25:12.461591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.583 [2024-11-19 23:25:12.619365] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:26.583 [2024-11-19 23:25:12.619404] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:26.583 [2024-11-19 23:25:12.619420] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:26.583 [2024-11-19 23:25:12.620931] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:26.583 [2024-11-19 23:25:12.621227] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:26.583 [2024-11-19 23:25:12.621243] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:26.583 [2024-11-19 23:25:12.621397] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
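
The bdev discovery traced at bdev/blockdev.sh@747-748 above is a plain RPC-plus-jq pipeline: bdev_get_bdevs returns a JSON array of bdev objects, jq keeps only the ones no module has claimed, and the resulting names feed the hello_world and bounds tests that follow. A minimal standalone sketch in bash (rpc_cmd is the autotest wrapper around scripts/rpc.py; the [0] pick for the hello-world bdev is an assumption, the trace only shows it resolving to nvme0n1):

    # Capture the unclaimed bdev objects, then just their names, as blockdev.sh does:
    mapfile -t bdevs < <(rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
    mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
    bdev_list=("${bdevs_name[@]}")     # six xNVMe bdevs in this run
    hello_world_bdev=${bdev_list[0]}   # assumed: first unclaimed bdev; nvme0n1 here
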
00:12:26.583 00:12:26.583 [2024-11-19 23:25:12.621412] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:26.583 00:12:26.583 real 0m0.505s 00:12:26.583 user 0m0.268s 00:12:26.583 sys 0m0.129s 00:12:26.583 23:25:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.583 23:25:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:26.583 ************************************ 00:12:26.583 END TEST bdev_hello_world 00:12:26.583 ************************************ 00:12:26.844 23:25:12 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:26.844 23:25:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.844 23:25:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.844 23:25:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.844 ************************************ 00:12:26.844 START TEST bdev_bounds 00:12:26.844 ************************************ 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81213 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:26.844 Process bdevio pid: 81213 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81213' 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81213 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 81213 ']' 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:26.844 23:25:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:26.844 [2024-11-19 23:25:12.837727] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
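
The bdevio run below follows the pattern visible in this setup: start bdevio in wait mode (-w) against the generated bdev.json, block until its RPC socket accepts connections, then drive the CUnit suites over that socket. Roughly, with the commands taken from the trace (waitforlisten is the autotest_common.sh helper that polls /var/tmp/spdk.sock; paths relative to the SPDK checkout):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"              # wait for the UNIX-domain RPC socket
    test/bdev/bdevio/tests.py perform_tests  # kicks off all six suites shown below
    killprocess "$bdevio_pid"
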
00:12:26.844 [2024-11-19 23:25:12.837833] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81213 ] 00:12:26.844 [2024-11-19 23:25:12.984657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.844 [2024-11-19 23:25:13.004584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.844 [2024-11-19 23:25:13.004929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.844 [2024-11-19 23:25:13.005012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.788 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.788 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:12:27.788 23:25:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:27.788 I/O targets: 00:12:27.788 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:27.788 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:27.788 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:27.788 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:27.788 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:27.788 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:27.788 00:12:27.788 00:12:27.788 CUnit - A unit testing framework for C - Version 2.1-3 00:12:27.788 http://cunit.sourceforge.net/ 00:12:27.788 00:12:27.788 00:12:27.788 Suite: bdevio tests on: nvme3n1 00:12:27.788 Test: blockdev write read block ...passed 00:12:27.788 Test: blockdev write zeroes read block ...passed 00:12:27.788 Test: blockdev write zeroes read no split ...passed 00:12:27.788 Test: blockdev write zeroes read split ...passed 00:12:27.788 Test: blockdev write zeroes read split partial ...passed 00:12:27.789 Test: blockdev reset ...passed 00:12:27.789 Test: blockdev write read 8 blocks ...passed 00:12:27.789 Test: blockdev write read size > 128k ...passed 00:12:27.789 Test: blockdev write read invalid size ...passed 00:12:27.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:27.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:27.789 Test: blockdev write read max offset ...passed 00:12:27.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:27.789 Test: blockdev writev readv 8 blocks ...passed 00:12:27.789 Test: blockdev writev readv 30 x 1block ...passed 00:12:27.789 Test: blockdev writev readv block ...passed 00:12:27.789 Test: blockdev writev readv size > 128k ...passed 00:12:27.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:27.789 Test: blockdev comparev and writev ...passed 00:12:27.789 Test: blockdev nvme passthru rw ...passed 00:12:27.789 Test: blockdev nvme passthru vendor specific ...passed 00:12:27.789 Test: blockdev nvme admin passthru ...passed 00:12:27.789 Test: blockdev copy ...passed 00:12:27.789 Suite: bdevio tests on: nvme2n3 00:12:27.789 Test: blockdev write read block ...passed 00:12:27.789 Test: blockdev write zeroes read block ...passed 00:12:27.789 Test: blockdev write zeroes read no split ...passed 00:12:27.789 Test: blockdev write zeroes read split ...passed 00:12:27.789 Test: blockdev write zeroes read split partial ...passed 00:12:27.789 Test: blockdev reset ...passed 
00:12:27.789 Test: blockdev write read 8 blocks ...passed 00:12:27.789 Test: blockdev write read size > 128k ...passed 00:12:27.789 Test: blockdev write read invalid size ...passed 00:12:27.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:27.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:27.789 Test: blockdev write read max offset ...passed 00:12:27.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:27.789 Test: blockdev writev readv 8 blocks ...passed 00:12:27.789 Test: blockdev writev readv 30 x 1block ...passed 00:12:27.789 Test: blockdev writev readv block ...passed 00:12:27.789 Test: blockdev writev readv size > 128k ...passed 00:12:27.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:27.789 Test: blockdev comparev and writev ...passed 00:12:27.789 Test: blockdev nvme passthru rw ...passed 00:12:27.789 Test: blockdev nvme passthru vendor specific ...passed 00:12:27.789 Test: blockdev nvme admin passthru ...passed 00:12:27.789 Test: blockdev copy ...passed 00:12:27.789 Suite: bdevio tests on: nvme2n2 00:12:27.789 Test: blockdev write read block ...passed 00:12:27.789 Test: blockdev write zeroes read block ...passed 00:12:27.789 Test: blockdev write zeroes read no split ...passed 00:12:27.789 Test: blockdev write zeroes read split ...passed 00:12:27.789 Test: blockdev write zeroes read split partial ...passed 00:12:27.789 Test: blockdev reset ...passed 00:12:27.789 Test: blockdev write read 8 blocks ...passed 00:12:27.789 Test: blockdev write read size > 128k ...passed 00:12:27.789 Test: blockdev write read invalid size ...passed 00:12:27.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:27.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:27.789 Test: blockdev write read max offset ...passed 00:12:27.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:27.789 Test: blockdev writev readv 8 blocks ...passed 00:12:27.789 Test: blockdev writev readv 30 x 1block ...passed 00:12:27.789 Test: blockdev writev readv block ...passed 00:12:27.789 Test: blockdev writev readv size > 128k ...passed 00:12:27.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:27.789 Test: blockdev comparev and writev ...passed 00:12:27.789 Test: blockdev nvme passthru rw ...passed 00:12:27.789 Test: blockdev nvme passthru vendor specific ...passed 00:12:27.789 Test: blockdev nvme admin passthru ...passed 00:12:27.789 Test: blockdev copy ...passed 00:12:27.789 Suite: bdevio tests on: nvme2n1 00:12:27.789 Test: blockdev write read block ...passed 00:12:27.789 Test: blockdev write zeroes read block ...passed 00:12:27.789 Test: blockdev write zeroes read no split ...passed 00:12:27.789 Test: blockdev write zeroes read split ...passed 00:12:27.789 Test: blockdev write zeroes read split partial ...passed 00:12:27.789 Test: blockdev reset ...passed 00:12:27.789 Test: blockdev write read 8 blocks ...passed 00:12:27.789 Test: blockdev write read size > 128k ...passed 00:12:27.789 Test: blockdev write read invalid size ...passed 00:12:27.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:27.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:27.789 Test: blockdev write read max offset ...passed 00:12:27.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:27.789 Test: blockdev writev readv 8 blocks 
...passed 00:12:27.789 Test: blockdev writev readv 30 x 1block ...passed 00:12:27.789 Test: blockdev writev readv block ...passed 00:12:27.789 Test: blockdev writev readv size > 128k ...passed 00:12:27.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:27.789 Test: blockdev comparev and writev ...passed 00:12:27.789 Test: blockdev nvme passthru rw ...passed 00:12:27.789 Test: blockdev nvme passthru vendor specific ...passed 00:12:27.789 Test: blockdev nvme admin passthru ...passed 00:12:27.789 Test: blockdev copy ...passed 00:12:27.789 Suite: bdevio tests on: nvme1n1 00:12:27.789 Test: blockdev write read block ...passed 00:12:27.789 Test: blockdev write zeroes read block ...passed 00:12:27.789 Test: blockdev write zeroes read no split ...passed 00:12:27.789 Test: blockdev write zeroes read split ...passed 00:12:27.789 Test: blockdev write zeroes read split partial ...passed 00:12:27.789 Test: blockdev reset ...passed 00:12:27.789 Test: blockdev write read 8 blocks ...passed 00:12:27.789 Test: blockdev write read size > 128k ...passed 00:12:27.789 Test: blockdev write read invalid size ...passed 00:12:27.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:27.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:27.789 Test: blockdev write read max offset ...passed 00:12:27.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:27.789 Test: blockdev writev readv 8 blocks ...passed 00:12:27.789 Test: blockdev writev readv 30 x 1block ...passed 00:12:27.789 Test: blockdev writev readv block ...passed 00:12:27.789 Test: blockdev writev readv size > 128k ...passed 00:12:27.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:27.789 Test: blockdev comparev and writev ...passed 00:12:27.789 Test: blockdev nvme passthru rw ...passed 00:12:27.789 Test: blockdev nvme passthru vendor specific ...passed 00:12:27.789 Test: blockdev nvme admin passthru ...passed 00:12:27.789 Test: blockdev copy ...passed 00:12:27.789 Suite: bdevio tests on: nvme0n1 00:12:27.789 Test: blockdev write read block ...passed 00:12:27.789 Test: blockdev write zeroes read block ...passed 00:12:27.789 Test: blockdev write zeroes read no split ...passed 00:12:27.789 Test: blockdev write zeroes read split ...passed 00:12:27.789 Test: blockdev write zeroes read split partial ...passed 00:12:27.789 Test: blockdev reset ...passed 00:12:27.789 Test: blockdev write read 8 blocks ...passed 00:12:27.789 Test: blockdev write read size > 128k ...passed 00:12:27.789 Test: blockdev write read invalid size ...passed 00:12:27.789 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:27.789 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:27.789 Test: blockdev write read max offset ...passed 00:12:27.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:27.789 Test: blockdev writev readv 8 blocks ...passed 00:12:27.789 Test: blockdev writev readv 30 x 1block ...passed 00:12:27.789 Test: blockdev writev readv block ...passed 00:12:27.789 Test: blockdev writev readv size > 128k ...passed 00:12:27.789 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:27.789 Test: blockdev comparev and writev ...passed 00:12:27.789 Test: blockdev nvme passthru rw ...passed 00:12:27.789 Test: blockdev nvme passthru vendor specific ...passed 00:12:27.789 Test: blockdev nvme admin passthru ...passed 00:12:27.790 Test: blockdev copy ...passed 
00:12:27.790 00:12:27.790 Run Summary: Type Total Ran Passed Failed Inactive 00:12:27.790 suites 6 6 n/a 0 0 00:12:27.790 tests 138 138 138 0 0 00:12:27.790 asserts 780 780 780 0 n/a 00:12:27.790 00:12:27.790 Elapsed time = 0.383 seconds 00:12:27.790 0 00:12:27.790 23:25:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81213 00:12:27.790 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 81213 ']' 00:12:27.790 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 81213 00:12:27.790 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:12:27.790 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.790 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81213 00:12:28.066 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.066 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.066 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81213' 00:12:28.066 killing process with pid 81213 00:12:28.066 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 81213 00:12:28.066 23:25:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 81213 00:12:28.066 23:25:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:12:28.066 00:12:28.066 real 0m1.332s 00:12:28.066 user 0m3.425s 00:12:28.066 sys 0m0.254s 00:12:28.066 23:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.066 23:25:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:28.066 ************************************ 00:12:28.066 END TEST bdev_bounds 00:12:28.066 ************************************ 00:12:28.066 23:25:14 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:28.066 23:25:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.066 23:25:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.067 23:25:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.067 ************************************ 00:12:28.067 START TEST bdev_nbd 00:12:28.067 ************************************ 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
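
killprocess, traced here for pid 81213 just as for 80841 earlier, is deliberately defensive: it confirms the pid is still alive, inspects the process name (reactor_0 in both runs) so it never signals a sudo wrapper directly, then kills and reaps. Reduced to the calls visible in the trace (a sketch only; the uname and sudo branches of the real autotest_common.sh helper are elided):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1        # bail out if the process is already gone
        ps --no-headers -o comm= "$pid"   # the trace compares this name against "sudo"
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                       # reap it and propagate the exit status
    }
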
00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:28.067 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81262 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81262 /var/tmp/spdk-nbd.sock 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 81262 ']' 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:28.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.068 23:25:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:28.068 [2024-11-19 23:25:14.234395] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
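
With /sys/module/nbd present, the nbd_rpc_start_stop_verify phase traced below repeats one sequence per bdev: export the bdev as a kernel block device over the dedicated /var/tmp/spdk-nbd.sock RPC socket, poll /proc/partitions until the node appears, then prove it is readable with a single O_DIRECT read. A sketch of one iteration (commands as in the trace; the poll interval is assumed, since the trace only shows the 20-attempt loop bounds):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break   # kernel created the node
        sleep 0.1                                   # assumed back-off, not visible in the trace
    done
    dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s test/bdev/nbdtest)" != 0 ]      # the 4096-byte block made it back
    rm -f test/bdev/nbdtest
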
00:12:28.068 [2024-11-19 23:25:14.234642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.334 [2024-11-19 23:25:14.389766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.334 [2024-11-19 23:25:14.408350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:28.906 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.168 
1+0 records in 00:12:29.168 1+0 records out 00:12:29.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516649 s, 7.9 MB/s 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:29.168 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.429 1+0 records in 00:12:29.429 1+0 records out 00:12:29.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692729 s, 5.9 MB/s 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:29.429 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:29.690 23:25:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.690 1+0 records in 00:12:29.690 1+0 records out 00:12:29.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478707 s, 8.6 MB/s 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:29.690 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:29.952 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:29.952 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:29.952 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:29.952 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.953 1+0 records in 00:12:29.953 1+0 records out 00:12:29.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436227 s, 9.4 MB/s 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:29.953 23:25:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.214 1+0 records in 00:12:30.214 1+0 records out 00:12:30.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062613 s, 6.5 MB/s 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:30.214 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:12:30.476 23:25:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.476 1+0 records in 00:12:30.476 1+0 records out 00:12:30.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289184 s, 14.2 MB/s 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd0", 00:12:30.476 "bdev_name": "nvme0n1" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd1", 00:12:30.476 "bdev_name": "nvme1n1" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd2", 00:12:30.476 "bdev_name": "nvme2n1" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd3", 00:12:30.476 "bdev_name": "nvme2n2" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd4", 00:12:30.476 "bdev_name": "nvme2n3" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd5", 00:12:30.476 "bdev_name": "nvme3n1" 00:12:30.476 } 00:12:30.476 ]' 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:30.476 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd0", 00:12:30.476 "bdev_name": "nvme0n1" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd1", 00:12:30.476 "bdev_name": "nvme1n1" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd2", 00:12:30.476 "bdev_name": "nvme2n1" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd3", 00:12:30.476 "bdev_name": "nvme2n2" 00:12:30.476 }, 00:12:30.476 { 00:12:30.476 "nbd_device": "/dev/nbd4", 00:12:30.476 "bdev_name": "nvme2n3" 00:12:30.476 }, 00:12:30.477 { 00:12:30.477 "nbd_device": "/dev/nbd5", 00:12:30.477 "bdev_name": "nvme3n1" 00:12:30.477 } 00:12:30.477 ]' 00:12:30.477 23:25:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.738 23:25:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.000 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.261 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:31.522 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:31.522 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:31.522 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:31.522 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.522 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.522 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:31.522 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.523 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.523 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.523 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.784 23:25:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.046 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:32.306 /dev/nbd0 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.306 1+0 records in 00:12:32.306 1+0 records out 00:12:32.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428621 s, 9.6 MB/s 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.306 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:32.567 /dev/nbd1 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.567 1+0 records in 00:12:32.567 1+0 records out 00:12:32.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799001 s, 5.1 MB/s 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:32.567 23:25:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.567 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:32.829 /dev/nbd10 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.829 1+0 records in 00:12:32.829 1+0 records out 00:12:32.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059497 s, 6.9 MB/s 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:32.829 23:25:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:33.090 /dev/nbd11 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.090 23:25:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.090 1+0 records in 00:12:33.090 1+0 records out 00:12:33.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461607 s, 8.9 MB/s 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:33.090 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:33.351 /dev/nbd12 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.351 1+0 records in 00:12:33.351 1+0 records out 00:12:33.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381279 s, 10.7 MB/s 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:33.351 /dev/nbd13 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:12:33.351 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.614 1+0 records in 00:12:33.614 1+0 records out 00:12:33.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656288 s, 6.2 MB/s 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd0", 00:12:33.614 "bdev_name": "nvme0n1" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd1", 00:12:33.614 "bdev_name": "nvme1n1" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd10", 00:12:33.614 "bdev_name": "nvme2n1" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd11", 00:12:33.614 "bdev_name": "nvme2n2" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd12", 00:12:33.614 "bdev_name": "nvme2n3" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd13", 00:12:33.614 "bdev_name": "nvme3n1" 00:12:33.614 } 00:12:33.614 ]' 00:12:33.614 23:25:19 
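All six bdevs are now exported and nbd_get_disks reports the full device-to-bdev mapping. The attach loop that produced this state, sketched in bash (the rpc.py path, socket, and device lists are the ones traced; waitfornbd is the probe sketched above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # Export each bdev on its NBD device, waiting for the kernel node each time.
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done

    # Cross-check the harness performs next: parse nbd_get_disks with jq and
    # count /dev/nbd entries; it must equal the number of devices attached.
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    (( count == ${#nbd_list[@]} ))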
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd0", 00:12:33.614 "bdev_name": "nvme0n1" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd1", 00:12:33.614 "bdev_name": "nvme1n1" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd10", 00:12:33.614 "bdev_name": "nvme2n1" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd11", 00:12:33.614 "bdev_name": "nvme2n2" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd12", 00:12:33.614 "bdev_name": "nvme2n3" 00:12:33.614 }, 00:12:33.614 { 00:12:33.614 "nbd_device": "/dev/nbd13", 00:12:33.614 "bdev_name": "nvme3n1" 00:12:33.614 } 00:12:33.614 ]' 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:33.614 /dev/nbd1 00:12:33.614 /dev/nbd10 00:12:33.614 /dev/nbd11 00:12:33.614 /dev/nbd12 00:12:33.614 /dev/nbd13' 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:33.614 /dev/nbd1 00:12:33.614 /dev/nbd10 00:12:33.614 /dev/nbd11 00:12:33.614 /dev/nbd12 00:12:33.614 /dev/nbd13' 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:33.614 256+0 records in 00:12:33.614 256+0 records out 00:12:33.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00730705 s, 144 MB/s 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.614 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:33.876 256+0 records in 00:12:33.876 256+0 records out 00:12:33.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138499 s, 7.6 MB/s 00:12:33.876 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.876 23:25:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:34.138 256+0 records in 00:12:34.138 256+0 records out 00:12:34.138 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.164564 s, 6.4 MB/s 00:12:34.138 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.138 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:34.138 256+0 records in 00:12:34.138 256+0 records out 00:12:34.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196583 s, 5.3 MB/s 00:12:34.138 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.138 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:34.399 256+0 records in 00:12:34.399 256+0 records out 00:12:34.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150273 s, 7.0 MB/s 00:12:34.399 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.399 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:34.660 256+0 records in 00:12:34.660 256+0 records out 00:12:34.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145622 s, 7.2 MB/s 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:34.660 256+0 records in 00:12:34.660 256+0 records out 00:12:34.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.207755 s, 5.0 MB/s 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.660 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:34.920 23:25:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:34.920 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:34.920 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:34.920 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:34.920 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:34.920 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:34.920 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.181 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.181 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.181 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.182 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
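The phase that just finished is the actual data-integrity test: one 1 MiB random pattern is written through every NBD device with O_DIRECT, then read back and compared byte-for-byte before the devices are detached. Condensed into a sketch (same file name and dd/cmp flags as the traced run; error handling reduced to set -e):

    set -e
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    # One shared 1 MiB random pattern (256 x 4 KiB blocks)...
    dd if=/dev/urandom of="$tmp" bs=4096 count=256

    # ...written to every device, bypassing the page cache so the I/O really
    # traverses the NBD connection into the SPDK bdev...
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # ...then compared back byte-for-byte (-b) over the first 1 MiB (-n 1M).
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"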
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.444 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.705 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.965 23:25:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.225 23:25:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:36.225 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:36.486 malloc_lvol_verify 00:12:36.486 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:36.747 31bc391b-ce63-4aba-8777-76c8a883e777 00:12:36.747 23:25:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:37.008 af5d2e5c-1d58-46bb-99ce-84d10ad8782a 00:12:37.008 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:37.295 /dev/nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:12:37.295 mke2fs 1.47.0 (5-Feb-2023) 00:12:37.295 Discarding device blocks: 0/4096 done 00:12:37.295 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:37.295 00:12:37.295 Allocating group tables: 0/1 done 00:12:37.295 Writing inode tables: 0/1 done 00:12:37.295 Creating journal (1024 blocks): done 00:12:37.295 Writing superblocks and filesystem accounting information: 0/1 done 00:12:37.295 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81262 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 81262 ']' 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 81262 00:12:37.295 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81262 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.595 killing process with pid 81262 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81262' 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 81262 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 81262 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:37.595 00:12:37.595 real 0m9.437s 00:12:37.595 user 0m13.260s 00:12:37.595 sys 0m3.329s 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.595 23:25:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:37.595 ************************************ 
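The final bdev_nbd step traced just above is an end-to-end filesystem check: a malloc-backed logical volume is exported over NBD and must accept mkfs.ext4. Its RPC sequence, restated as a sketch (names and sizes are the traced ones; the capacity check condenses the harness's wait loop):

    # Build a small logical volume and export it over NBD.
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

    # The kernel must report a non-zero capacity before mkfs can run
    # (the trace read 8192 sectors from /sys/block/nbd0/size).
    (( $(cat /sys/block/nbd0/size) > 0 ))
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0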
00:12:37.595 END TEST bdev_nbd 00:12:37.595 ************************************ 00:12:37.595 23:25:23 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:37.595 23:25:23 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:37.596 23:25:23 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:37.596 23:25:23 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:37.596 23:25:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.596 23:25:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.596 23:25:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.596 ************************************ 00:12:37.596 START TEST bdev_fio 00:12:37.596 ************************************ 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:37.596 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:37.596 ************************************ 00:12:37.596 START TEST bdev_fio_rw_verify 00:12:37.596 ************************************ 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:37.596 23:25:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:37.864 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.864 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.864 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.864 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.864 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.864 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:37.864 fio-3.35 00:12:37.864 Starting 6 threads 00:12:50.103 00:12:50.103 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81655: Tue Nov 19 23:25:34 2024 00:12:50.103 read: IOPS=12.8k, BW=49.8MiB/s (52.2MB/s)(498MiB/10004msec) 00:12:50.103 slat (usec): min=2, max=3017, avg= 7.04, stdev=22.49 00:12:50.103 clat (usec): min=95, max=9343, avg=1565.61, 
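Because this build is ASan-instrumented, fio cannot simply dlopen the SPDK ioengine: the sanitizer runtime has to be in the process before the plugin loads. The trace above resolves it with ldd and preloads both; roughly (the ldd/grep/awk pipeline and the fio arguments mirror the traced command, where libasan.so.8 was the hit):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    [[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"

    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output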
stdev=878.77 00:12:50.103 lat (usec): min=99, max=9357, avg=1572.65, stdev=879.50 00:12:50.103 clat percentiles (usec): 00:12:50.103 | 50.000th=[ 1467], 99.000th=[ 4293], 99.900th=[ 5538], 99.990th=[ 8160], 00:12:50.103 | 99.999th=[ 9372] 00:12:50.103 write: IOPS=12.9k, BW=50.5MiB/s (52.9MB/s)(505MiB/10004msec); 0 zone resets 00:12:50.103 slat (usec): min=12, max=7782, avg=45.20, stdev=165.43 00:12:50.103 clat (usec): min=82, max=9183, avg=1820.65, stdev=955.79 00:12:50.103 lat (usec): min=96, max=9200, avg=1865.86, stdev=969.91 00:12:50.103 clat percentiles (usec): 00:12:50.103 | 50.000th=[ 1680], 99.000th=[ 4752], 99.900th=[ 6390], 99.990th=[ 7832], 00:12:50.103 | 99.999th=[ 8356] 00:12:50.103 bw ( KiB/s): min=40001, max=77476, per=100.00%, avg=52217.05, stdev=1694.26, samples=114 00:12:50.103 iops : min= 9999, max=19369, avg=13053.53, stdev=423.58, samples=114 00:12:50.103 lat (usec) : 100=0.01%, 250=1.22%, 500=5.63%, 750=7.15%, 1000=9.76% 00:12:50.103 lat (msec) : 2=45.05%, 4=28.95%, 10=2.24% 00:12:50.103 cpu : usr=43.87%, sys=33.39%, ctx=4937, majf=0, minf=13494 00:12:50.103 IO depths : 1=11.3%, 2=23.6%, 4=51.4%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.103 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.103 issued rwts: total=127609,129298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.103 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:50.103 00:12:50.103 Run status group 0 (all jobs): 00:12:50.103 READ: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=498MiB (523MB), run=10004-10004msec 00:12:50.103 WRITE: bw=50.5MiB/s (52.9MB/s), 50.5MiB/s-50.5MiB/s (52.9MB/s-52.9MB/s), io=505MiB (530MB), run=10004-10004msec 00:12:50.103 ----------------------------------------------------- 00:12:50.103 Suppressions used: 00:12:50.103 count bytes template 00:12:50.103 6 48 /usr/src/fio/parse.c 00:12:50.103 1604 153984 /usr/src/fio/iolog.c 00:12:50.103 1 8 libtcmalloc_minimal.so 00:12:50.103 1 904 libcrypto.so 00:12:50.103 ----------------------------------------------------- 00:12:50.103 00:12:50.103 00:12:50.103 real 0m11.088s 00:12:50.103 user 0m27.009s 00:12:50.103 sys 0m20.323s 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:50.103 ************************************ 00:12:50.103 END TEST bdev_fio_rw_verify 00:12:50.103 ************************************ 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
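A quick way to sanity-check a summary like the one above is to recompute bandwidth from the issued I/O counts, since bandwidth is just requests times block size over runtime:

    # READ side: 127609 requests x 4096 B over 10.004 s.
    echo "scale=1; 127609 * 4096 / 10.004 / 1000000" | bc   # -> 52.2  (MB/s, matches 49.8 MiB/s)
    # WRITE side: 129298 requests x 4096 B over 10.004 s.
    echo "scale=1; 129298 * 4096 / 10.004 / 1000000" | bc   # -> 52.9  (MB/s, matches 50.5 MiB/s)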
fio_dir=/usr/src/fio 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:12:50.103 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c9750a1e-973f-4350-9784-01a9012d2ac1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c9750a1e-973f-4350-9784-01a9012d2ac1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c6f22fcc-cb14-4425-bec3-0099b963a0b2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c6f22fcc-cb14-4425-bec3-0099b963a0b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "df31370a-f40d-4d65-8dd8-290fb10c08c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df31370a-f40d-4d65-8dd8-290fb10c08c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "85092311-44b1-4b57-be98-e9b1204d484a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "85092311-44b1-4b57-be98-e9b1204d484a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6f0fa0f6-032b-4e01-9fa5-eec239c0caf9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f0fa0f6-032b-4e01-9fa5-eec239c0caf9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7cab64bd-3ba3-4328-ab90-83ed0f006812"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7cab64bd-3ba3-4328-ab90-83ed0f006812",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:50.104 /home/vagrant/spdk_repo/spdk 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
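The reason no trim pass ran is visible in the JSON dump above: every xNVMe bdev reports "unmap": false, so the jq selector finds no names and blockdev.sh skips the trim job entirely. The same capability query can be made against a live target (bdev_get_bdevs is the standard SPDK RPC; the default /var/tmp/spdk.sock socket here is an assumption, since this run fed jq from the printf'd dump instead):

    # List bdevs that can honor unmap/trim; empty output means skip the trim test.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'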
00:12:50.104 00:12:50.104 real 0m11.270s 00:12:50.104 user 0m27.095s 00:12:50.104 sys 0m20.391s 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.104 23:25:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 ************************************ 00:12:50.104 END TEST bdev_fio 00:12:50.104 ************************************ 00:12:50.104 23:25:34 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:50.104 23:25:34 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:50.104 23:25:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:12:50.104 23:25:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.104 23:25:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.104 ************************************ 00:12:50.104 START TEST bdev_verify 00:12:50.104 ************************************ 00:12:50.104 23:25:35 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:50.104 [2024-11-19 23:25:35.071510] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:12:50.104 [2024-11-19 23:25:35.071654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81824 ] 00:12:50.104 [2024-11-19 23:25:35.235007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:50.104 [2024-11-19 23:25:35.265312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.104 [2024-11-19 23:25:35.265398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.104 Running I/O for 5 seconds... 
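bdev_verify switches from fio to SPDK's own bdevperf example. The invocation that produced the run below, with its flags spelled out (values are the traced ones; the reading of -C follows bdevperf's usage text):

    #   -q 128     128 outstanding I/Os per job
    #   -o 4096    4 KiB per I/O
    #   -w verify  write each block, read it back, compare
    #   -t 5       run for 5 seconds
    #   -C         let every reactor submit to every bdev (two jobs per device below)
    #   -m 0x3     core mask: reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3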
00:12:51.628 24832.00 IOPS, 97.00 MiB/s [2024-11-19T23:25:38.764Z] 24832.00 IOPS, 97.00 MiB/s [2024-11-19T23:25:40.152Z] 25312.00 IOPS, 98.88 MiB/s [2024-11-19T23:25:40.726Z] 24920.00 IOPS, 97.34 MiB/s [2024-11-19T23:25:40.726Z] 25103.40 IOPS, 98.06 MiB/s 00:12:54.534 Latency(us) 00:12:54.534 [2024-11-19T23:25:40.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.534 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x0 length 0xa0000 00:12:54.534 nvme0n1 : 5.04 1852.58 7.24 0.00 0.00 68946.23 8670.92 73803.62 00:12:54.534 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0xa0000 length 0xa0000 00:12:54.534 nvme0n1 : 5.02 2090.61 8.17 0.00 0.00 61116.39 12653.49 56461.78 00:12:54.534 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x0 length 0xbd0bd 00:12:54.534 nvme1n1 : 5.06 2373.12 9.27 0.00 0.00 53610.74 8872.57 56461.78 00:12:54.534 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:54.534 nvme1n1 : 5.05 2656.84 10.38 0.00 0.00 47963.64 6553.60 61301.37 00:12:54.534 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x0 length 0x80000 00:12:54.534 nvme2n1 : 5.05 1877.31 7.33 0.00 0.00 67583.66 11947.72 66947.54 00:12:54.534 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x80000 length 0x80000 00:12:54.534 nvme2n1 : 5.04 2106.82 8.23 0.00 0.00 60448.12 10737.82 61301.37 00:12:54.534 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x0 length 0x80000 00:12:54.534 nvme2n2 : 5.07 1866.61 7.29 0.00 0.00 67842.29 10586.58 68560.74 00:12:54.534 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x80000 length 0x80000 00:12:54.534 nvme2n2 : 5.06 2099.18 8.20 0.00 0.00 60581.27 7410.61 49605.71 00:12:54.534 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x0 length 0x80000 00:12:54.534 nvme2n3 : 5.06 1846.15 7.21 0.00 0.00 68536.61 10586.58 68560.74 00:12:54.534 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x80000 length 0x80000 00:12:54.534 nvme2n3 : 5.05 2102.02 8.21 0.00 0.00 60422.08 10284.11 54445.29 00:12:54.534 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x0 length 0x20000 00:12:54.534 nvme3n1 : 5.07 1867.75 7.30 0.00 0.00 67602.59 4385.87 64931.05 00:12:54.534 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:54.534 Verification LBA range: start 0x20000 length 0x20000 00:12:54.534 nvme3n1 : 5.07 2096.75 8.19 0.00 0.00 60465.50 5772.21 56461.78 00:12:54.534 [2024-11-19T23:25:40.726Z] =================================================================================================================== 00:12:54.534 [2024-11-19T23:25:40.726Z] Total : 24835.74 97.01 0.00 0.00 61397.09 4385.87 73803.62 00:12:54.796 00:12:54.796 real 0m5.849s 00:12:54.796 user 0m9.313s 00:12:54.796 sys 0m1.475s 00:12:54.796 23:25:40 blockdev_xnvme.bdev_verify -- 
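Each device appears twice in the table because of -C plus the two-core mask: bit i of the mask selects core i, so 0x3 runs one job per device on core 0 (the "Core Mask 0x1" rows) and one on core 1 (the "0x2" rows):

    # How the 0x3 mask decomposes; each set bit becomes a reactor with its own jobs.
    printf '0x%x\n' $(( (1 << 0) | (1 << 1) ))   # -> 0x3 (cores 0 and 1)

Note that the core-1 (0x2) rows consistently post higher IOPS and lower average latency than their core-0 counterparts above, which is worth watching across nightly runs.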
common/autotest_common.sh@1130 -- # xtrace_disable
00:12:54.796 ************************************
00:12:54.796 END TEST bdev_verify
00:12:54.796 ************************************
00:12:54.796 23:25:40 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:12:54.796 23:25:40 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:54.796 23:25:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:12:54.796 23:25:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:54.796 23:25:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:54.796 ************************************
00:12:54.796 START TEST bdev_verify_big_io
00:12:54.796 ************************************
00:12:54.796 23:25:40 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:55.057 [2024-11-19 23:25:41.000075] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
00:12:55.058 [2024-11-19 23:25:41.000217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81912 ]
00:12:55.058 [2024-11-19 23:25:41.159889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:55.058 [2024-11-19 23:25:41.190260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:55.058 [2024-11-19 23:25:41.190341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:55.319 Running I/O for 5 seconds...
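Both verify passes drive the same bdevperf binary against the xnvme bdevs defined in bdev.json; only the I/O size changes (-o 4096 above, -o 65536 here). A sketch of the invocation with the flags annotated (the -C flag is reproduced from the trace without further interpretation, and paths are the ones this CI job uses):

# bdevperf flags as exercised by these two tests:
#   --json <file>  bdev configuration to load at startup
#   -q 128         queue depth per job
#   -o 65536       I/O size in bytes (4096 for bdev_verify, 65536 for big_io)
#   -w verify      workload: write, then read back and compare
#   -t 5           run time in seconds
#   -m 0x3         core mask (two reactors, matching the startup lines above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 65536 -w verify -t 5 -C -m 0x3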
00:13:01.451 2032.00 IOPS, 127.00 MiB/s [2024-11-19T23:25:48.588Z] 3223.00 IOPS, 201.44 MiB/s [2024-11-19T23:25:48.588Z] 3446.33 IOPS, 215.40 MiB/s 00:13:02.396 Latency(us) 00:13:02.396 [2024-11-19T23:25:48.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.396 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.396 Verification LBA range: start 0x0 length 0xa000 00:13:02.396 nvme0n1 : 5.95 86.01 5.38 0.00 0.00 1411685.29 6225.92 1948738.17 00:13:02.396 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.396 Verification LBA range: start 0xa000 length 0xa000 00:13:02.396 nvme0n1 : 5.72 138.37 8.65 0.00 0.00 899671.79 135508.28 1471232.79 00:13:02.396 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.396 Verification LBA range: start 0x0 length 0xbd0b 00:13:02.397 nvme1n1 : 5.89 119.50 7.47 0.00 0.00 949034.03 71787.13 1535760.54 00:13:02.397 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:02.397 nvme1n1 : 5.55 161.43 10.09 0.00 0.00 745273.56 10838.65 822728.86 00:13:02.397 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x0 length 0x8000 00:13:02.397 nvme2n1 : 6.05 74.07 4.63 0.00 0.00 1454346.47 31053.98 3381254.30 00:13:02.397 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x8000 length 0x8000 00:13:02.397 nvme2n1 : 5.73 133.97 8.37 0.00 0.00 888931.77 229073.53 664635.86 00:13:02.397 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x0 length 0x8000 00:13:02.397 nvme2n2 : 6.26 120.07 7.50 0.00 0.00 845697.84 31053.98 2877937.82 00:13:02.397 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x8000 length 0x8000 00:13:02.397 nvme2n2 : 5.74 150.59 9.41 0.00 0.00 769861.67 8822.15 803370.54 00:13:02.397 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x0 length 0x8000 00:13:02.397 nvme2n3 : 6.55 162.51 10.16 0.00 0.00 595360.35 27625.94 3510309.81 00:13:02.397 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x8000 length 0x8000 00:13:02.397 nvme2n3 : 5.74 144.95 9.06 0.00 0.00 777847.91 10687.41 1574477.19 00:13:02.397 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x0 length 0x2000 00:13:02.397 nvme3n1 : 6.76 279.60 17.47 0.00 0.00 333908.97 3112.96 2994087.78 00:13:02.397 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:02.397 Verification LBA range: start 0x2000 length 0x2000 00:13:02.397 nvme3n1 : 5.75 178.15 11.13 0.00 0.00 618469.34 4839.58 871124.68 00:13:02.397 [2024-11-19T23:25:48.589Z] =================================================================================================================== 00:13:02.397 [2024-11-19T23:25:48.589Z] Total : 1749.22 109.33 0.00 0.00 754033.15 3112.96 3510309.81 00:13:02.397 00:13:02.397 real 0m7.556s 00:13:02.397 user 0m13.932s 00:13:02.397 sys 0m0.475s 00:13:02.397 ************************************ 00:13:02.397 END TEST bdev_verify_big_io 00:13:02.397 ************************************ 00:13:02.397 
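The MiB/s column in both result tables is just IOPS multiplied by the I/O size. Checking the two Total rows (an arithmetic sanity check on numbers already in the log, not additional data):

# 4 KiB verify run:  24835.74 IOPS * 4096 B  ->  97.01 MiB/s (matches its table)
# 64 KiB big-IO run:  1749.22 IOPS * 65536 B -> 109.33 MiB/s (matches the table above)
awk 'BEGIN { printf "%.2f\n", 24835.74 * 4096  / 1048576 }'   # 97.01
awk 'BEGIN { printf "%.2f\n", 1749.22  * 65536 / 1048576 }'   # 109.33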
23:25:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.397 23:25:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.397 23:25:48 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:02.397 23:25:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:02.397 23:25:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.397 23:25:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.397 ************************************ 00:13:02.397 START TEST bdev_write_zeroes 00:13:02.397 ************************************ 00:13:02.397 23:25:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:02.658 [2024-11-19 23:25:48.624412] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:13:02.658 [2024-11-19 23:25:48.624560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82016 ] 00:13:02.658 [2024-11-19 23:25:48.786362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.658 [2024-11-19 23:25:48.809506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.918 Running I/O for 1 seconds... 00:13:04.318 93120.00 IOPS, 363.75 MiB/s 00:13:04.318 Latency(us) 00:13:04.318 [2024-11-19T23:25:50.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.318 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:04.318 nvme0n1 : 1.03 15132.88 59.11 0.00 0.00 8448.56 6049.48 27222.65 00:13:04.318 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:04.318 nvme1n1 : 1.03 16104.62 62.91 0.00 0.00 7933.10 5469.74 16837.71 00:13:04.318 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:04.318 nvme2n1 : 1.03 15110.95 59.03 0.00 0.00 8396.80 5772.21 25508.63 00:13:04.318 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:04.318 nvme2n2 : 1.03 15094.12 58.96 0.00 0.00 8396.74 5520.15 26819.35 00:13:04.318 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:04.318 nvme2n3 : 1.04 15077.08 58.89 0.00 0.00 8400.25 5646.18 28029.24 00:13:04.318 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:04.318 nvme3n1 : 1.04 15060.32 58.83 0.00 0.00 8400.00 5646.18 29440.79 00:13:04.318 [2024-11-19T23:25:50.510Z] =================================================================================================================== 00:13:04.318 [2024-11-19T23:25:50.510Z] Total : 91579.97 357.73 0.00 0.00 8325.23 5469.74 29440.79 00:13:04.318 00:13:04.318 real 0m1.756s 00:13:04.318 user 0m1.104s 00:13:04.318 sys 0m0.454s 00:13:04.318 ************************************ 00:13:04.318 END TEST bdev_write_zeroes 00:13:04.318 ************************************ 00:13:04.318 23:25:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:04.318 23:25:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:04.318 23:25:50 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.318 23:25:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:04.318 23:25:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.318 23:25:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.318 ************************************ 00:13:04.318 START TEST bdev_json_nonenclosed 00:13:04.318 ************************************ 00:13:04.318 23:25:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.318 [2024-11-19 23:25:50.450195] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:13:04.318 [2024-11-19 23:25:50.450348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82061 ] 00:13:04.579 [2024-11-19 23:25:50.609821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.579 [2024-11-19 23:25:50.638490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.579 [2024-11-19 23:25:50.638603] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:04.579 [2024-11-19 23:25:50.638620] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:04.579 [2024-11-19 23:25:50.638639] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:04.579 00:13:04.579 real 0m0.340s 00:13:04.579 user 0m0.123s 00:13:04.579 sys 0m0.112s 00:13:04.579 23:25:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.579 ************************************ 00:13:04.579 END TEST bdev_json_nonenclosed 00:13:04.579 ************************************ 00:13:04.579 23:25:50 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:04.840 23:25:50 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.840 23:25:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:13:04.840 23:25:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.840 23:25:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.840 ************************************ 00:13:04.840 START TEST bdev_json_nonarray 00:13:04.840 ************************************ 00:13:04.840 23:25:50 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:04.840 [2024-11-19 23:25:50.841473] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
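bdev_json_nonenclosed and bdev_json_nonarray feed deliberately malformed configs to bdevperf and expect json_config to reject them; the *ERROR* lines above (nonenclosed) and just below (nonarray) show the two failure modes. Sketching the shapes involved, as minimal examples inferred from those error messages rather than the actual nonenclosed.json/nonarray.json contents:

# Valid shape: a top-level object whose "subsystems" member is an array.
cat > /tmp/valid.json <<'EOF'
{ "subsystems": [] }
EOF

# Rejected with "Invalid JSON configuration: not enclosed in {}."
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

# Rejected with "Invalid JSON configuration: 'subsystems' should be an array."
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF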
00:13:04.840 [2024-11-19 23:25:50.841594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82086 ] 00:13:04.840 [2024-11-19 23:25:50.998772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.840 [2024-11-19 23:25:51.026826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.840 [2024-11-19 23:25:51.026947] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:04.840 [2024-11-19 23:25:51.026964] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:04.840 [2024-11-19 23:25:51.026983] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:05.101 00:13:05.101 real 0m0.324s 00:13:05.101 user 0m0.129s 00:13:05.101 sys 0m0.091s 00:13:05.101 ************************************ 00:13:05.101 END TEST bdev_json_nonarray 00:13:05.101 ************************************ 00:13:05.101 23:25:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.101 23:25:51 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:05.101 23:25:51 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:05.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:08.223 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.223 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.794 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:08.794 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.057 00:13:09.057 real 0m50.027s 00:13:09.057 user 1m16.732s 00:13:09.057 sys 0m32.054s 00:13:09.057 ************************************ 00:13:09.057 END TEST blockdev_xnvme 00:13:09.057 23:25:55 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.057 23:25:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.057 ************************************ 00:13:09.057 23:25:55 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:09.057 23:25:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.057 23:25:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.057 23:25:55 -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.057 ************************************ 00:13:09.057 START TEST ublk 00:13:09.057 ************************************ 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:09.057 * Looking for test storage... 00:13:09.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.057 23:25:55 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.057 23:25:55 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.057 23:25:55 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.057 23:25:55 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.057 23:25:55 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.057 23:25:55 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.057 23:25:55 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.057 23:25:55 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:09.057 23:25:55 ublk -- scripts/common.sh@345 -- # : 1 00:13:09.057 23:25:55 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.057 23:25:55 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.057 23:25:55 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:09.057 23:25:55 ublk -- scripts/common.sh@353 -- # local d=1 00:13:09.057 23:25:55 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.057 23:25:55 ublk -- scripts/common.sh@355 -- # echo 1 00:13:09.057 23:25:55 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.057 23:25:55 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@353 -- # local d=2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.057 23:25:55 ublk -- scripts/common.sh@355 -- # echo 2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.057 23:25:55 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.057 23:25:55 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.057 23:25:55 ublk -- scripts/common.sh@368 -- # return 0 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.057 --rc genhtml_branch_coverage=1 00:13:09.057 --rc genhtml_function_coverage=1 00:13:09.057 --rc genhtml_legend=1 00:13:09.057 --rc geninfo_all_blocks=1 00:13:09.057 --rc geninfo_unexecuted_blocks=1 00:13:09.057 00:13:09.057 ' 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.057 --rc genhtml_branch_coverage=1 00:13:09.057 --rc genhtml_function_coverage=1 00:13:09.057 --rc genhtml_legend=1 00:13:09.057 --rc geninfo_all_blocks=1 00:13:09.057 --rc geninfo_unexecuted_blocks=1 00:13:09.057 00:13:09.057 ' 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.057 --rc genhtml_branch_coverage=1 00:13:09.057 --rc genhtml_function_coverage=1 00:13:09.057 --rc genhtml_legend=1 00:13:09.057 --rc geninfo_all_blocks=1 00:13:09.057 --rc geninfo_unexecuted_blocks=1 00:13:09.057 00:13:09.057 ' 00:13:09.057 23:25:55 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.057 --rc genhtml_branch_coverage=1 00:13:09.057 --rc genhtml_function_coverage=1 00:13:09.057 --rc genhtml_legend=1 00:13:09.057 --rc geninfo_all_blocks=1 00:13:09.057 --rc geninfo_unexecuted_blocks=1 00:13:09.057 00:13:09.057 ' 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:09.057 23:25:55 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:09.057 23:25:55 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:09.057 23:25:55 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:09.057 23:25:55 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:09.057 23:25:55 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:09.057 23:25:55 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:09.057 23:25:55 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:09.057 23:25:55 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:09.057 23:25:55 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:09.057 23:25:55 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:09.320 23:25:55 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:09.320 23:25:55 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:09.320 23:25:55 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.320 23:25:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:09.320 ************************************ 00:13:09.320 START TEST test_save_ublk_config 00:13:09.320 ************************************ 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:13:09.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82377 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82377 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82377 ']' 00:13:09.320 23:25:55 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.321 23:25:55 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.321 23:25:55 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.321 23:25:55 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.321 23:25:55 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:09.321 [2024-11-19 23:25:55.353623] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
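The test_save_config run starting here builds a small ublk setup, captures the live configuration with save_config, and later restarts a second target from that dump. Roughly, the RPC sequence looks like the sketch below (scripts/rpc.py is assumed to be the standalone equivalent of the rpc_cmd wrapper used in the trace; the malloc geometry of 8192 blocks x 4096 B, i.e. 32 MB, and the 1-queue/128-depth ublk disk are taken from the config dump that follows):

# Sketch of the save side of the round trip, under the assumptions above.
./scripts/rpc.py ublk_create_target                     # cpumask "1" in the saved config
./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096  # 32 MB, 4096 B blocks
./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128  # exposes /dev/ublkb0
./scripts/rpc.py save_config > /tmp/ublk_config.json    # the JSON printed below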
00:13:09.321 [2024-11-19 23:25:55.353787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82377 ] 00:13:09.582 [2024-11-19 23:25:55.514211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.582 [2024-11-19 23:25:55.542352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.155 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.155 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:10.155 23:25:56 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:10.155 23:25:56 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:10.155 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.155 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:10.155 [2024-11-19 23:25:56.205763] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:10.155 [2024-11-19 23:25:56.206680] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:10.155 malloc0 00:13:10.155 [2024-11-19 23:25:56.237880] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:10.155 [2024-11-19 23:25:56.237973] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:10.155 [2024-11-19 23:25:56.237984] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:10.155 [2024-11-19 23:25:56.237998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:10.155 [2024-11-19 23:25:56.246863] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:10.155 [2024-11-19 23:25:56.246904] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:10.155 [2024-11-19 23:25:56.253772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:10.155 [2024-11-19 23:25:56.253885] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:10.155 [2024-11-19 23:25:56.270754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:10.156 0 00:13:10.156 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.156 23:25:56 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:10.156 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.156 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:10.417 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.417 23:25:56 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:10.417 "subsystems": [ 00:13:10.417 { 00:13:10.418 "subsystem": "fsdev", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "fsdev_set_opts", 00:13:10.418 "params": { 00:13:10.418 "fsdev_io_pool_size": 65535, 00:13:10.418 "fsdev_io_cache_size": 256 00:13:10.418 } 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "keyring", 00:13:10.418 "config": [] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "iobuf", 00:13:10.418 "config": [ 00:13:10.418 { 
00:13:10.418 "method": "iobuf_set_options", 00:13:10.418 "params": { 00:13:10.418 "small_pool_count": 8192, 00:13:10.418 "large_pool_count": 1024, 00:13:10.418 "small_bufsize": 8192, 00:13:10.418 "large_bufsize": 135168, 00:13:10.418 "enable_numa": false 00:13:10.418 } 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "sock", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "sock_set_default_impl", 00:13:10.418 "params": { 00:13:10.418 "impl_name": "posix" 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "sock_impl_set_options", 00:13:10.418 "params": { 00:13:10.418 "impl_name": "ssl", 00:13:10.418 "recv_buf_size": 4096, 00:13:10.418 "send_buf_size": 4096, 00:13:10.418 "enable_recv_pipe": true, 00:13:10.418 "enable_quickack": false, 00:13:10.418 "enable_placement_id": 0, 00:13:10.418 "enable_zerocopy_send_server": true, 00:13:10.418 "enable_zerocopy_send_client": false, 00:13:10.418 "zerocopy_threshold": 0, 00:13:10.418 "tls_version": 0, 00:13:10.418 "enable_ktls": false 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "sock_impl_set_options", 00:13:10.418 "params": { 00:13:10.418 "impl_name": "posix", 00:13:10.418 "recv_buf_size": 2097152, 00:13:10.418 "send_buf_size": 2097152, 00:13:10.418 "enable_recv_pipe": true, 00:13:10.418 "enable_quickack": false, 00:13:10.418 "enable_placement_id": 0, 00:13:10.418 "enable_zerocopy_send_server": true, 00:13:10.418 "enable_zerocopy_send_client": false, 00:13:10.418 "zerocopy_threshold": 0, 00:13:10.418 "tls_version": 0, 00:13:10.418 "enable_ktls": false 00:13:10.418 } 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "vmd", 00:13:10.418 "config": [] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "accel", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "accel_set_options", 00:13:10.418 "params": { 00:13:10.418 "small_cache_size": 128, 00:13:10.418 "large_cache_size": 16, 00:13:10.418 "task_count": 2048, 00:13:10.418 "sequence_count": 2048, 00:13:10.418 "buf_count": 2048 00:13:10.418 } 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "bdev", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "bdev_set_options", 00:13:10.418 "params": { 00:13:10.418 "bdev_io_pool_size": 65535, 00:13:10.418 "bdev_io_cache_size": 256, 00:13:10.418 "bdev_auto_examine": true, 00:13:10.418 "iobuf_small_cache_size": 128, 00:13:10.418 "iobuf_large_cache_size": 16 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "bdev_raid_set_options", 00:13:10.418 "params": { 00:13:10.418 "process_window_size_kb": 1024, 00:13:10.418 "process_max_bandwidth_mb_sec": 0 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "bdev_iscsi_set_options", 00:13:10.418 "params": { 00:13:10.418 "timeout_sec": 30 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "bdev_nvme_set_options", 00:13:10.418 "params": { 00:13:10.418 "action_on_timeout": "none", 00:13:10.418 "timeout_us": 0, 00:13:10.418 "timeout_admin_us": 0, 00:13:10.418 "keep_alive_timeout_ms": 10000, 00:13:10.418 "arbitration_burst": 0, 00:13:10.418 "low_priority_weight": 0, 00:13:10.418 "medium_priority_weight": 0, 00:13:10.418 "high_priority_weight": 0, 00:13:10.418 "nvme_adminq_poll_period_us": 10000, 00:13:10.418 "nvme_ioq_poll_period_us": 0, 00:13:10.418 "io_queue_requests": 0, 00:13:10.418 "delay_cmd_submit": true, 00:13:10.418 "transport_retry_count": 4, 00:13:10.418 
"bdev_retry_count": 3, 00:13:10.418 "transport_ack_timeout": 0, 00:13:10.418 "ctrlr_loss_timeout_sec": 0, 00:13:10.418 "reconnect_delay_sec": 0, 00:13:10.418 "fast_io_fail_timeout_sec": 0, 00:13:10.418 "disable_auto_failback": false, 00:13:10.418 "generate_uuids": false, 00:13:10.418 "transport_tos": 0, 00:13:10.418 "nvme_error_stat": false, 00:13:10.418 "rdma_srq_size": 0, 00:13:10.418 "io_path_stat": false, 00:13:10.418 "allow_accel_sequence": false, 00:13:10.418 "rdma_max_cq_size": 0, 00:13:10.418 "rdma_cm_event_timeout_ms": 0, 00:13:10.418 "dhchap_digests": [ 00:13:10.418 "sha256", 00:13:10.418 "sha384", 00:13:10.418 "sha512" 00:13:10.418 ], 00:13:10.418 "dhchap_dhgroups": [ 00:13:10.418 "null", 00:13:10.418 "ffdhe2048", 00:13:10.418 "ffdhe3072", 00:13:10.418 "ffdhe4096", 00:13:10.418 "ffdhe6144", 00:13:10.418 "ffdhe8192" 00:13:10.418 ] 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "bdev_nvme_set_hotplug", 00:13:10.418 "params": { 00:13:10.418 "period_us": 100000, 00:13:10.418 "enable": false 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "bdev_malloc_create", 00:13:10.418 "params": { 00:13:10.418 "name": "malloc0", 00:13:10.418 "num_blocks": 8192, 00:13:10.418 "block_size": 4096, 00:13:10.418 "physical_block_size": 4096, 00:13:10.418 "uuid": "5dd4d127-7066-44b0-bf42-6e215f3c5e6c", 00:13:10.418 "optimal_io_boundary": 0, 00:13:10.418 "md_size": 0, 00:13:10.418 "dif_type": 0, 00:13:10.418 "dif_is_head_of_md": false, 00:13:10.418 "dif_pi_format": 0 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "bdev_wait_for_examine" 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "scsi", 00:13:10.418 "config": null 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "scheduler", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "framework_set_scheduler", 00:13:10.418 "params": { 00:13:10.418 "name": "static" 00:13:10.418 } 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "vhost_scsi", 00:13:10.418 "config": [] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "vhost_blk", 00:13:10.418 "config": [] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "ublk", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "ublk_create_target", 00:13:10.418 "params": { 00:13:10.418 "cpumask": "1" 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "ublk_start_disk", 00:13:10.418 "params": { 00:13:10.418 "bdev_name": "malloc0", 00:13:10.418 "ublk_id": 0, 00:13:10.418 "num_queues": 1, 00:13:10.418 "queue_depth": 128 00:13:10.418 } 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "nbd", 00:13:10.418 "config": [] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "nvmf", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "nvmf_set_config", 00:13:10.418 "params": { 00:13:10.418 "discovery_filter": "match_any", 00:13:10.418 "admin_cmd_passthru": { 00:13:10.418 "identify_ctrlr": false 00:13:10.418 }, 00:13:10.418 "dhchap_digests": [ 00:13:10.418 "sha256", 00:13:10.418 "sha384", 00:13:10.418 "sha512" 00:13:10.418 ], 00:13:10.418 "dhchap_dhgroups": [ 00:13:10.418 "null", 00:13:10.418 "ffdhe2048", 00:13:10.418 "ffdhe3072", 00:13:10.418 "ffdhe4096", 00:13:10.418 "ffdhe6144", 00:13:10.418 "ffdhe8192" 00:13:10.418 ] 00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "nvmf_set_max_subsystems", 00:13:10.418 "params": { 00:13:10.418 "max_subsystems": 1024 
00:13:10.418 } 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "method": "nvmf_set_crdt", 00:13:10.418 "params": { 00:13:10.418 "crdt1": 0, 00:13:10.418 "crdt2": 0, 00:13:10.418 "crdt3": 0 00:13:10.418 } 00:13:10.418 } 00:13:10.418 ] 00:13:10.418 }, 00:13:10.418 { 00:13:10.418 "subsystem": "iscsi", 00:13:10.418 "config": [ 00:13:10.418 { 00:13:10.418 "method": "iscsi_set_options", 00:13:10.419 "params": { 00:13:10.419 "node_base": "iqn.2016-06.io.spdk", 00:13:10.419 "max_sessions": 128, 00:13:10.419 "max_connections_per_session": 2, 00:13:10.419 "max_queue_depth": 64, 00:13:10.419 "default_time2wait": 2, 00:13:10.419 "default_time2retain": 20, 00:13:10.419 "first_burst_length": 8192, 00:13:10.419 "immediate_data": true, 00:13:10.419 "allow_duplicated_isid": false, 00:13:10.419 "error_recovery_level": 0, 00:13:10.419 "nop_timeout": 60, 00:13:10.419 "nop_in_interval": 30, 00:13:10.419 "disable_chap": false, 00:13:10.419 "require_chap": false, 00:13:10.419 "mutual_chap": false, 00:13:10.419 "chap_group": 0, 00:13:10.419 "max_large_datain_per_connection": 64, 00:13:10.419 "max_r2t_per_connection": 4, 00:13:10.419 "pdu_pool_size": 36864, 00:13:10.419 "immediate_data_pool_size": 16384, 00:13:10.419 "data_out_pool_size": 2048 00:13:10.419 } 00:13:10.419 } 00:13:10.419 ] 00:13:10.419 } 00:13:10.419 ] 00:13:10.419 }' 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82377 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82377 ']' 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82377 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82377 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82377' 00:13:10.419 killing process with pid 82377 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82377 00:13:10.419 23:25:56 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82377 00:13:10.681 [2024-11-19 23:25:56.864185] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:10.942 [2024-11-19 23:25:56.897870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:10.942 [2024-11-19 23:25:56.898019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:10.942 [2024-11-19 23:25:56.908773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:10.942 [2024-11-19 23:25:56.908834] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:10.942 [2024-11-19 23:25:56.908848] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:10.942 [2024-11-19 23:25:56.908883] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:10.942 [2024-11-19 23:25:56.909039] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82415 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 82415 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 82415 ']' 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:11.203 23:25:57 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:11.203 "subsystems": [ 00:13:11.203 { 00:13:11.203 "subsystem": "fsdev", 00:13:11.203 "config": [ 00:13:11.203 { 00:13:11.203 "method": "fsdev_set_opts", 00:13:11.203 "params": { 00:13:11.203 "fsdev_io_pool_size": 65535, 00:13:11.203 "fsdev_io_cache_size": 256 00:13:11.203 } 00:13:11.203 } 00:13:11.203 ] 00:13:11.203 }, 00:13:11.203 { 00:13:11.203 "subsystem": "keyring", 00:13:11.203 "config": [] 00:13:11.203 }, 00:13:11.203 { 00:13:11.203 "subsystem": "iobuf", 00:13:11.203 "config": [ 00:13:11.203 { 00:13:11.203 "method": "iobuf_set_options", 00:13:11.203 "params": { 00:13:11.203 "small_pool_count": 8192, 00:13:11.203 "large_pool_count": 1024, 00:13:11.203 "small_bufsize": 8192, 00:13:11.203 "large_bufsize": 135168, 00:13:11.203 "enable_numa": false 00:13:11.203 } 00:13:11.203 } 00:13:11.203 ] 00:13:11.203 }, 00:13:11.203 { 00:13:11.203 "subsystem": "sock", 00:13:11.203 "config": [ 00:13:11.203 { 00:13:11.203 "method": "sock_set_default_impl", 00:13:11.203 "params": { 00:13:11.203 "impl_name": "posix" 00:13:11.203 } 00:13:11.203 }, 00:13:11.203 { 00:13:11.203 "method": "sock_impl_set_options", 00:13:11.203 "params": { 00:13:11.203 "impl_name": "ssl", 00:13:11.203 "recv_buf_size": 4096, 00:13:11.203 "send_buf_size": 4096, 00:13:11.203 "enable_recv_pipe": true, 00:13:11.203 "enable_quickack": false, 00:13:11.203 "enable_placement_id": 0, 00:13:11.203 "enable_zerocopy_send_server": true, 00:13:11.203 "enable_zerocopy_send_client": false, 00:13:11.203 "zerocopy_threshold": 0, 00:13:11.203 "tls_version": 0, 00:13:11.203 "enable_ktls": false 00:13:11.203 } 00:13:11.203 }, 00:13:11.203 { 00:13:11.203 "method": "sock_impl_set_options", 00:13:11.203 "params": { 00:13:11.203 "impl_name": "posix", 00:13:11.203 "recv_buf_size": 2097152, 00:13:11.204 "send_buf_size": 2097152, 00:13:11.204 "enable_recv_pipe": true, 00:13:11.204 "enable_quickack": false, 00:13:11.204 "enable_placement_id": 0, 00:13:11.204 "enable_zerocopy_send_server": true, 00:13:11.204 "enable_zerocopy_send_client": false, 00:13:11.204 "zerocopy_threshold": 0, 00:13:11.204 "tls_version": 0, 00:13:11.204 "enable_ktls": false 00:13:11.204 } 00:13:11.204 } 00:13:11.204 ] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "vmd", 00:13:11.204 "config": [] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "accel", 00:13:11.204 "config": [ 00:13:11.204 { 00:13:11.204 "method": "accel_set_options", 00:13:11.204 "params": { 00:13:11.204 "small_cache_size": 128, 00:13:11.204 "large_cache_size": 16, 00:13:11.204 "task_count": 2048, 00:13:11.204 "sequence_count": 2048, 00:13:11.204 "buf_count": 2048 00:13:11.204 
} 00:13:11.204 } 00:13:11.204 ] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "bdev", 00:13:11.204 "config": [ 00:13:11.204 { 00:13:11.204 "method": "bdev_set_options", 00:13:11.204 "params": { 00:13:11.204 "bdev_io_pool_size": 65535, 00:13:11.204 "bdev_io_cache_size": 256, 00:13:11.204 "bdev_auto_examine": true, 00:13:11.204 "iobuf_small_cache_size": 128, 00:13:11.204 "iobuf_large_cache_size": 16 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "bdev_raid_set_options", 00:13:11.204 "params": { 00:13:11.204 "process_window_size_kb": 1024, 00:13:11.204 "process_max_bandwidth_mb_sec": 0 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "bdev_iscsi_set_options", 00:13:11.204 "params": { 00:13:11.204 "timeout_sec": 30 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "bdev_nvme_set_options", 00:13:11.204 "params": { 00:13:11.204 "action_on_timeout": "none", 00:13:11.204 "timeout_us": 0, 00:13:11.204 "timeout_admin_us": 0, 00:13:11.204 "keep_alive_timeout_ms": 10000, 00:13:11.204 "arbitration_burst": 0, 00:13:11.204 "low_priority_weight": 0, 00:13:11.204 "medium_priority_weight": 0, 00:13:11.204 "high_priority_weight": 0, 00:13:11.204 "nvme_adminq_poll_period_us": 10000, 00:13:11.204 "nvme_ioq_poll_period_us": 0, 00:13:11.204 "io_queue_requests": 0, 00:13:11.204 "delay_cmd_submit": true, 00:13:11.204 "transport_retry_count": 4, 00:13:11.204 "bdev_retry_count": 3, 00:13:11.204 "transport_ack_timeout": 0, 00:13:11.204 "ctrlr_loss_timeout_sec": 0, 00:13:11.204 "reconnect_delay_sec": 0, 00:13:11.204 "fast_io_fail_timeout_sec": 0, 00:13:11.204 "disable_auto_failback": false, 00:13:11.204 "generate_uuids": false, 00:13:11.204 "transport_tos": 0, 00:13:11.204 "nvme_error_stat": false, 00:13:11.204 "rdma_srq_size": 0, 00:13:11.204 "io_path_stat": false, 00:13:11.204 "allow_accel_sequence": false, 00:13:11.204 "rdma_max_cq_size": 0, 00:13:11.204 "rdma_cm_event_timeout_ms": 0, 00:13:11.204 "dhchap_digests": [ 00:13:11.204 "sha256", 00:13:11.204 "sha384", 00:13:11.204 "sha512" 00:13:11.204 ], 00:13:11.204 "dhchap_dhgroups": [ 00:13:11.204 "null", 00:13:11.204 "ffdhe2048", 00:13:11.204 "ffdhe3072", 00:13:11.204 "ffdhe4096", 00:13:11.204 "ffdhe6144", 00:13:11.204 "ffdhe8192" 00:13:11.204 ] 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "bdev_nvme_set_hotplug", 00:13:11.204 "params": { 00:13:11.204 "period_us": 100000, 00:13:11.204 "enable": false 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "bdev_malloc_create", 00:13:11.204 "params": { 00:13:11.204 "name": "malloc0", 00:13:11.204 "num_blocks": 8192, 00:13:11.204 "block_size": 4096, 00:13:11.204 "physical_block_size": 4096, 00:13:11.204 "uuid": "5dd4d127-7066-44b0-bf42-6e215f3c5e6c", 00:13:11.204 "optimal_io_boundary": 0, 00:13:11.204 "md_size": 0, 00:13:11.204 "dif_type": 0, 00:13:11.204 "dif_is_head_of_md": false, 00:13:11.204 "dif_pi_format": 0 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "bdev_wait_for_examine" 00:13:11.204 } 00:13:11.204 ] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "scsi", 00:13:11.204 "config": null 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "scheduler", 00:13:11.204 "config": [ 00:13:11.204 { 00:13:11.204 "method": "framework_set_scheduler", 00:13:11.204 "params": { 00:13:11.204 "name": "static" 00:13:11.204 } 00:13:11.204 } 00:13:11.204 ] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "vhost_scsi", 00:13:11.204 "config": [] 00:13:11.204 }, 00:13:11.204 { 
00:13:11.204 "subsystem": "vhost_blk", 00:13:11.204 "config": [] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "ublk", 00:13:11.204 "config": [ 00:13:11.204 { 00:13:11.204 "method": "ublk_create_target", 00:13:11.204 "params": { 00:13:11.204 "cpumask": "1" 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "ublk_start_disk", 00:13:11.204 "params": { 00:13:11.204 "bdev_name": "malloc0", 00:13:11.204 "ublk_id": 0, 00:13:11.204 "num_queues": 1, 00:13:11.204 "queue_depth": 128 00:13:11.204 } 00:13:11.204 } 00:13:11.204 ] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "nbd", 00:13:11.204 "config": [] 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "subsystem": "nvmf", 00:13:11.204 "config": [ 00:13:11.204 { 00:13:11.204 "method": "nvmf_set_config", 00:13:11.204 "params": { 00:13:11.204 "discovery_filter": "match_any", 00:13:11.204 "admin_cmd_passthru": { 00:13:11.204 "identify_ctrlr": false 00:13:11.204 }, 00:13:11.204 "dhchap_digests": [ 00:13:11.204 "sha256", 00:13:11.204 "sha384", 00:13:11.204 "sha512" 00:13:11.204 ], 00:13:11.204 "dhchap_dhgroups": [ 00:13:11.204 "null", 00:13:11.204 "ffdhe2048", 00:13:11.204 "ffdhe3072", 00:13:11.204 "ffdhe4096", 00:13:11.204 "ffdhe6144", 00:13:11.204 "ffdhe8192" 00:13:11.204 ] 00:13:11.204 } 00:13:11.204 }, 00:13:11.204 { 00:13:11.204 "method": "nvmf_set_max_subsystems", 00:13:11.204 "params": { 00:13:11.205 "max_subsystems": 1024 00:13:11.205 } 00:13:11.205 }, 00:13:11.205 { 00:13:11.205 "method": "nvmf_set_crdt", 00:13:11.205 "params": { 00:13:11.205 "crdt1": 0, 00:13:11.205 "crdt2": 0, 00:13:11.205 "crdt3": 0 00:13:11.205 } 00:13:11.205 } 00:13:11.205 ] 00:13:11.205 }, 00:13:11.205 { 00:13:11.205 "subsystem": "iscsi", 00:13:11.205 "config": [ 00:13:11.205 { 00:13:11.205 "method": "iscsi_set_options", 00:13:11.205 "params": { 00:13:11.205 "node_base": "iqn.2016-06.io.spdk", 00:13:11.205 "max_sessions": 128, 00:13:11.205 "max_connections_per_session": 2, 00:13:11.205 "max_queue_depth": 64, 00:13:11.205 "default_time2wait": 2, 00:13:11.205 "default_time2retain": 20, 00:13:11.205 "first_burst_length": 8192, 00:13:11.205 "immediate_data": true, 00:13:11.205 "allow_duplicated_isid": false, 00:13:11.205 "error_recovery_level": 0, 00:13:11.205 "nop_timeout": 60, 00:13:11.205 "nop_in_interval": 30, 00:13:11.205 "disable_chap": false, 00:13:11.205 "require_chap": false, 00:13:11.205 "mutual_chap": false, 00:13:11.205 "chap_group": 0, 00:13:11.205 "max_large_datain_per_connection": 64, 00:13:11.205 "max_r2t_per_connection": 4, 00:13:11.205 "pdu_pool_size": 36864, 00:13:11.205 "immediate_data_pool_size": 16384, 00:13:11.205 "data_out_pool_size": 2048 00:13:11.205 } 00:13:11.205 } 00:13:11.205 ] 00:13:11.205 } 00:13:11.205 ] 00:13:11.205 }' 00:13:11.205 23:25:57 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:11.465 [2024-11-19 23:25:57.424778] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:13:11.465 [2024-11-19 23:25:57.424913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82415 ] 00:13:11.465 [2024-11-19 23:25:57.586723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.465 [2024-11-19 23:25:57.614842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.038 [2024-11-19 23:25:57.999756] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:12.038 [2024-11-19 23:25:58.000151] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:12.038 [2024-11-19 23:25:58.007893] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:12.038 [2024-11-19 23:25:58.007998] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:12.038 [2024-11-19 23:25:58.008007] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:12.038 [2024-11-19 23:25:58.008018] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:12.038 [2024-11-19 23:25:58.016849] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:12.038 [2024-11-19 23:25:58.016886] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:12.038 [2024-11-19 23:25:58.023765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:12.038 [2024-11-19 23:25:58.023876] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:12.038 [2024-11-19 23:25:58.040771] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82415 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 82415 ']' 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 82415 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82415 00:13:12.299 killing process with pid 82415 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.299 
23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82415' 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 82415 00:13:12.299 23:25:58 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 82415 00:13:12.561 [2024-11-19 23:25:58.617993] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:12.561 [2024-11-19 23:25:58.651861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:12.561 [2024-11-19 23:25:58.652029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:12.561 [2024-11-19 23:25:58.662752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:12.561 [2024-11-19 23:25:58.662836] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:12.561 [2024-11-19 23:25:58.662846] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:12.561 [2024-11-19 23:25:58.662882] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:12.561 [2024-11-19 23:25:58.663034] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:13.136 23:25:59 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:13.136 ************************************ 00:13:13.136 END TEST test_save_ublk_config 00:13:13.136 ************************************ 00:13:13.136 00:13:13.136 real 0m3.823s 00:13:13.136 user 0m2.611s 00:13:13.136 sys 0m1.876s 00:13:13.136 23:25:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.136 23:25:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:13.136 23:25:59 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82466 00:13:13.136 23:25:59 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:13.136 23:25:59 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82466 00:13:13.136 23:25:59 ublk -- common/autotest_common.sh@835 -- # '[' -z 82466 ']' 00:13:13.136 23:25:59 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.136 23:25:59 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.136 23:25:59 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:13.136 23:25:59 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.136 23:25:59 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.136 23:25:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:13.136 [2024-11-19 23:25:59.233006] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
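For the main ublk suite a fresh target (pid 82466) is started on two cores with a cleanup trap armed before any test runs, exactly as the trace above shows. The startup pattern, sketched:

# Start the target, remember its pid, arm cleanup, then block until the
# RPC socket answers. waitforlisten is the autotest_common.sh helper seen
# in the trace; cleanup is the suite's own teardown function.
./build/bin/spdk_tgt -m 0x3 -L ublk &
spdk_pid=$!
trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_pid"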
00:13:13.136 [2024-11-19 23:25:59.233151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82466 ] 00:13:13.399 [2024-11-19 23:25:59.394807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:13.399 [2024-11-19 23:25:59.424801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.399 [2024-11-19 23:25:59.424807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.973 23:26:00 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.973 23:26:00 ublk -- common/autotest_common.sh@868 -- # return 0 00:13:13.973 23:26:00 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:13.973 23:26:00 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:13.973 23:26:00 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.973 23:26:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:13.973 ************************************ 00:13:13.973 START TEST test_create_ublk 00:13:13.973 ************************************ 00:13:13.973 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:13:13.973 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:13.973 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.973 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:13.973 [2024-11-19 23:26:00.110757] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:13.973 [2024-11-19 23:26:00.112432] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:13.973 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.973 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:13.973 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:13.973 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.973 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:14.236 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.236 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 [2024-11-19 23:26:00.202920] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:14.236 [2024-11-19 23:26:00.203387] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:14.236 [2024-11-19 23:26:00.203407] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:14.236 [2024-11-19 23:26:00.203418] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:14.236 [2024-11-19 23:26:00.210789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:14.236 [2024-11-19 23:26:00.210830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:14.236 
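Once the ADD_DEV/SET_PARAMS/START_DEV handshake completing just below finishes, the kernel exposes /dev/ublkb0 and the test interrogates it over RPC. The verification steps that follow boil down to this sketch (rpc_cmd is the suite's wrapper around scripts/rpc.py; the jq paths are the ones used in the trace below):

# ublk_get_disks returns the JSON array echoed a few lines below; the test
# asserts on each field of entry 0 and on the device node itself.
disks=$(rpc_cmd ublk_get_disks -n 0)
[[ $(jq -r '.[0].ublk_device' <<<"$disks") == /dev/ublkb0 ]]
[[ $(jq -r '.[0].queue_depth' <<<"$disks") == 512 ]]
[[ $(jq -r '.[0].num_queues'  <<<"$disks") == 4 ]]
[[ $(jq -r '.[0].bdev_name'   <<<"$disks") == Malloc0 ]]
[[ -b /dev/ublkb0 ]]   # block device node actually exists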
[2024-11-19 23:26:00.218776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:14.236 [2024-11-19 23:26:00.219490] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:14.236 [2024-11-19 23:26:00.249832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:14.236 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:14.236 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.236 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:14.236 23:26:00 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:14.236 { 00:13:14.236 "ublk_device": "/dev/ublkb0", 00:13:14.236 "id": 0, 00:13:14.236 "queue_depth": 512, 00:13:14.236 "num_queues": 4, 00:13:14.236 "bdev_name": "Malloc0" 00:13:14.236 } 00:13:14.236 ]' 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:14.236 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:14.498 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:14.498 23:26:00 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
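Condensed from the xtrace above, test_create_ublk needs four RPCs to get from an empty target to a kernel block device that fio can open. A hedged sketch of the same sequence issued directly through scripts/rpc.py (the rpc_cmd wrapper in the trace forwards to that script):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC ublk_create_target                        # start the ublk target inside spdk_tgt
  $RPC bdev_malloc_create 128 4096               # 128 MiB RAM bdev, 4 KiB blocks; prints the name "Malloc0"
  $RPC ublk_start_disk Malloc0 0 -q 4 -d 512     # expose Malloc0 as /dev/ublkb0 with 4 queues of depth 512
  $RPC ublk_get_disks -n 0                       # JSON record checked by the jq assertions above

The ADD_DEV/SET_PARAMS/START_DEV control commands in the debug log are the kernel-side handshake that ublk_start_disk performs; once START_DEV completes, /dev/ublkb0 behaves like any other block device, which is why the fio run below can target it with --direct=1.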
00:13:14.498 23:26:00 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:14.498 fio: verification read phase will never start because write phase uses all of runtime 00:13:14.498 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:14.498 fio-3.35 00:13:14.498 Starting 1 process 00:13:26.709 00:13:26.709 fio_test: (groupid=0, jobs=1): err= 0: pid=82510: Tue Nov 19 23:26:10 2024 00:13:26.709 write: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(540MiB/10001msec); 0 zone resets 00:13:26.709 clat (usec): min=34, max=10120, avg=71.52, stdev=126.50 00:13:26.709 lat (usec): min=34, max=10127, avg=71.98, stdev=126.58 00:13:26.709 clat percentiles (usec): 00:13:26.709 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 59], 00:13:26.709 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:13:26.709 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 74], 95.00th=[ 93], 00:13:26.709 | 99.00th=[ 161], 99.50th=[ 269], 99.90th=[ 2573], 99.95th=[ 3458], 00:13:26.709 | 99.99th=[ 4113] 00:13:26.709 bw ( KiB/s): min= 8512, max=61616, per=99.65%, avg=55094.32, stdev=14195.82, samples=19 00:13:26.709 iops : min= 2128, max=15404, avg=13773.58, stdev=3548.95, samples=19 00:13:26.709 lat (usec) : 50=0.02%, 100=95.21%, 250=4.11%, 500=0.47%, 750=0.01% 00:13:26.709 lat (usec) : 1000=0.01% 00:13:26.709 lat (msec) : 2=0.04%, 4=0.11%, 10=0.02%, 20=0.01% 00:13:26.709 cpu : usr=2.29%, sys=9.85%, ctx=138239, majf=0, minf=796 00:13:26.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.709 issued rwts: total=0,138239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.709 00:13:26.709 Run status group 0 (all jobs): 00:13:26.709 WRITE: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=540MiB (566MB), run=10001-10001msec 00:13:26.709 00:13:26.709 Disk stats (read/write): 00:13:26.709 ublkb0: ios=0/136699, merge=0/0, ticks=0/8647, in_queue=8647, util=99.09% 00:13:26.710 23:26:10 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 [2024-11-19 23:26:10.688386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.710 [2024-11-19 23:26:10.736258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.710 [2024-11-19 23:26:10.737249] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.710 [2024-11-19 23:26:10.743759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.710 [2024-11-19 23:26:10.744076] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:26.710 [2024-11-19 23:26:10.744141] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.710 23:26:10 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT 
rpc_cmd ublk_stop_disk 0 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 [2024-11-19 23:26:10.759823] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:26.710 request: 00:13:26.710 { 00:13:26.710 "ublk_id": 0, 00:13:26.710 "method": "ublk_stop_disk", 00:13:26.710 "req_id": 1 00:13:26.710 } 00:13:26.710 Got JSON-RPC error response 00:13:26.710 response: 00:13:26.710 { 00:13:26.710 "code": -19, 00:13:26.710 "message": "No such device" 00:13:26.710 } 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:26.710 23:26:10 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 [2024-11-19 23:26:10.775805] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:26.710 [2024-11-19 23:26:10.776944] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:26.710 [2024-11-19 23:26:10.776969] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.710 23:26:10 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.710 23:26:10 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:26.710 23:26:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.710 23:26:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:26.710 23:26:10 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:26.710 23:26:10 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:26.710 23:26:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.710 23:26:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:26.710 23:26:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:26.710 ************************************ 00:13:26.710 END TEST test_create_ublk 00:13:26.710 ************************************ 00:13:26.710 23:26:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:26.710 00:13:26.710 real 0m10.836s 00:13:26.710 user 0m0.512s 00:13:26.710 sys 0m1.082s 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 23:26:10 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:26.710 23:26:10 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:26.710 23:26:10 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.710 23:26:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 ************************************ 00:13:26.710 START TEST test_create_multi_ublk 00:13:26.710 ************************************ 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 [2024-11-19 23:26:10.986747] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:26.710 [2024-11-19 23:26:10.987610] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.710 23:26:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.710 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 [2024-11-19 23:26:11.058849] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:26.711 [2024-11-19 23:26:11.059140] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:26.711 [2024-11-19 23:26:11.059153] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:26.711 [2024-11-19 23:26:11.059159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:26.711 [2024-11-19 23:26:11.070774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:26.711 [2024-11-19 23:26:11.070792] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:26.711 [2024-11-19 23:26:11.082760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:26.711 [2024-11-19 23:26:11.083234] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:26.711 [2024-11-19 23:26:11.123769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 [2024-11-19 23:26:11.207847] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:26.711 [2024-11-19 23:26:11.208146] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:26.711 [2024-11-19 23:26:11.208153] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:26.711 [2024-11-19 23:26:11.208159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:26.711 [2024-11-19 23:26:11.219784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:26.711 [2024-11-19 23:26:11.219802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:26.711 [2024-11-19 23:26:11.231753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:26.711 [2024-11-19 23:26:11.232244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:26.711 [2024-11-19 23:26:11.267757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.711 
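The multi-ublk test repeats the same create/start pair for ids 0 through 3, changing only the bdev name and the ublk id; each iteration goes through the identical ADD_DEV/SET_PARAMS/START_DEV handshake seen above. Roughly the loop being traced, with parameter values taken from this run (rpc.py path abbreviated):

  for i in $(seq 0 3); do                                 # MAX_DEV_ID=3 in this run
      rpc.py bdev_malloc_create -b Malloc$i 128 4096      # one 128 MiB backing bdev per disk
      rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512      # becomes /dev/ublkb$i
  done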
23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 [2024-11-19 23:26:11.351845] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:26.711 [2024-11-19 23:26:11.352140] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:26.711 [2024-11-19 23:26:11.352153] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:26.711 [2024-11-19 23:26:11.352158] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:26.711 [2024-11-19 23:26:11.363773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:26.711 [2024-11-19 23:26:11.363790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:26.711 [2024-11-19 23:26:11.375762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:26.711 [2024-11-19 23:26:11.376244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:26.711 [2024-11-19 23:26:11.395753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.711 [2024-11-19 23:26:11.478835] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:26.711 [2024-11-19 23:26:11.479127] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:26.711 [2024-11-19 23:26:11.479137] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:26.711 [2024-11-19 23:26:11.479143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:26.711 
[2024-11-19 23:26:11.490769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:26.711 [2024-11-19 23:26:11.490789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:26.711 [2024-11-19 23:26:11.502760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:26.711 [2024-11-19 23:26:11.503237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:26.711 [2024-11-19 23:26:11.515787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:26.711 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:26.712 { 00:13:26.712 "ublk_device": "/dev/ublkb0", 00:13:26.712 "id": 0, 00:13:26.712 "queue_depth": 512, 00:13:26.712 "num_queues": 4, 00:13:26.712 "bdev_name": "Malloc0" 00:13:26.712 }, 00:13:26.712 { 00:13:26.712 "ublk_device": "/dev/ublkb1", 00:13:26.712 "id": 1, 00:13:26.712 "queue_depth": 512, 00:13:26.712 "num_queues": 4, 00:13:26.712 "bdev_name": "Malloc1" 00:13:26.712 }, 00:13:26.712 { 00:13:26.712 "ublk_device": "/dev/ublkb2", 00:13:26.712 "id": 2, 00:13:26.712 "queue_depth": 512, 00:13:26.712 "num_queues": 4, 00:13:26.712 "bdev_name": "Malloc2" 00:13:26.712 }, 00:13:26.712 { 00:13:26.712 "ublk_device": "/dev/ublkb3", 00:13:26.712 "id": 3, 00:13:26.712 "queue_depth": 512, 00:13:26.712 "num_queues": 4, 00:13:26.712 "bdev_name": "Malloc3" 00:13:26.712 } 00:13:26.712 ]' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:26.712 23:26:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.712 [2024-11-19 23:26:12.202831] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.712 [2024-11-19 23:26:12.252284] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.712 [2024-11-19 23:26:12.253398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.712 [2024-11-19 23:26:12.258763] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.712 [2024-11-19 23:26:12.259036] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:26.712 [2024-11-19 23:26:12.259047] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.712 [2024-11-19 23:26:12.272818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.712 [2024-11-19 23:26:12.307273] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.712 [2024-11-19 23:26:12.308356] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.712 [2024-11-19 23:26:12.319781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.712 [2024-11-19 23:26:12.320035] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:26.712 [2024-11-19 23:26:12.320042] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.712 [2024-11-19 23:26:12.334816] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.712 [2024-11-19 23:26:12.384276] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.712 [2024-11-19 23:26:12.385233] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.712 [2024-11-19 23:26:12.390752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.712 [2024-11-19 23:26:12.390989] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:26.712 [2024-11-19 23:26:12.391000] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.712 [2024-11-19 23:26:12.406807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:26.712 [2024-11-19 23:26:12.445776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:26.712 [2024-11-19 23:26:12.446437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:26.712 [2024-11-19 23:26:12.450817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:26.712 [2024-11-19 23:26:12.451146] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:26.712 [2024-11-19 23:26:12.451158] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:26.712 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:26.713 [2024-11-19 23:26:12.642818] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:26.713 [2024-11-19 23:26:12.643982] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:26.713 [2024-11-19 23:26:12.644015] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:26.713 23:26:12 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.713 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:26.972 ************************************ 00:13:26.972 END TEST test_create_multi_ublk 00:13:26.972 ************************************ 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:26.972 00:13:26.972 real 0m2.009s 00:13:26.972 user 0m0.798s 00:13:26.972 sys 0m0.167s 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.972 23:26:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:26.972 23:26:13 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:26.972 23:26:13 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:26.972 23:26:13 ublk -- ublk/ublk.sh@130 -- # killprocess 82466 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@954 -- # '[' -z 82466 ']' 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@958 -- # kill -0 82466 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@959 -- # uname 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82466 00:13:26.972 killing process with pid 82466 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82466' 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@973 -- # kill 82466 00:13:26.972 23:26:13 ublk -- common/autotest_common.sh@978 -- # wait 82466 00:13:27.232 [2024-11-19 23:26:13.198149] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:27.233 [2024-11-19 23:26:13.198202] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:27.526 00:13:27.526 real 0m18.386s 00:13:27.526 user 0m28.516s 00:13:27.526 sys 0m7.233s 00:13:27.526 23:26:13 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.526 ************************************ 00:13:27.526 END TEST ublk 00:13:27.526 ************************************ 00:13:27.526 23:26:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:27.526 23:26:13 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:27.526 
23:26:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.526 23:26:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.526 23:26:13 -- common/autotest_common.sh@10 -- # set +x 00:13:27.526 ************************************ 00:13:27.526 START TEST ublk_recovery 00:13:27.526 ************************************ 00:13:27.526 23:26:13 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:27.526 * Looking for test storage... 00:13:27.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:27.526 23:26:13 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:27.526 23:26:13 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:27.526 23:26:13 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:27.526 23:26:13 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.526 23:26:13 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.527 23:26:13 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:27.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.527 --rc genhtml_branch_coverage=1 00:13:27.527 --rc genhtml_function_coverage=1 00:13:27.527 --rc genhtml_legend=1 00:13:27.527 --rc geninfo_all_blocks=1 00:13:27.527 --rc geninfo_unexecuted_blocks=1 00:13:27.527 00:13:27.527 ' 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:27.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.527 --rc genhtml_branch_coverage=1 00:13:27.527 --rc genhtml_function_coverage=1 00:13:27.527 --rc genhtml_legend=1 00:13:27.527 --rc geninfo_all_blocks=1 00:13:27.527 --rc geninfo_unexecuted_blocks=1 00:13:27.527 00:13:27.527 ' 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:27.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.527 --rc genhtml_branch_coverage=1 00:13:27.527 --rc genhtml_function_coverage=1 00:13:27.527 --rc genhtml_legend=1 00:13:27.527 --rc geninfo_all_blocks=1 00:13:27.527 --rc geninfo_unexecuted_blocks=1 00:13:27.527 00:13:27.527 ' 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:27.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.527 --rc genhtml_branch_coverage=1 00:13:27.527 --rc genhtml_function_coverage=1 00:13:27.527 --rc genhtml_legend=1 00:13:27.527 --rc geninfo_all_blocks=1 00:13:27.527 --rc geninfo_unexecuted_blocks=1 00:13:27.527 00:13:27.527 ' 00:13:27.527 23:26:13 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:27.527 23:26:13 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:27.527 23:26:13 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:27.527 23:26:13 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=82835 00:13:27.527 23:26:13 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:27.527 23:26:13 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 82835 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82835 ']' 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.527 23:26:13 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:27.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.527 23:26:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.805 [2024-11-19 23:26:13.729629] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:13:27.805 [2024-11-19 23:26:13.729739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82835 ] 00:13:27.805 [2024-11-19 23:26:13.878152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:27.805 [2024-11-19 23:26:13.895579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.805 [2024-11-19 23:26:13.895661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.376 23:26:14 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.376 23:26:14 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:13:28.376 23:26:14 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:28.376 23:26:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.376 23:26:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.635 [2024-11-19 23:26:14.570746] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:28.635 [2024-11-19 23:26:14.571659] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:28.635 23:26:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.635 23:26:14 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:28.635 23:26:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.635 23:26:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.635 malloc0 00:13:28.635 23:26:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.635 23:26:14 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:28.635 23:26:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.635 23:26:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:28.635 [2024-11-19 23:26:14.602853] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:13:28.635 [2024-11-19 23:26:14.602944] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:28.635 [2024-11-19 23:26:14.602950] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:28.635 [2024-11-19 23:26:14.602963] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:28.635 [2024-11-19 23:26:14.611815] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:28.635 [2024-11-19 23:26:14.611843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:28.635 [2024-11-19 23:26:14.618753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:28.635 [2024-11-19 23:26:14.618863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:28.636 [2024-11-19 23:26:14.641758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:28.636 1 00:13:28.636 23:26:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.636 23:26:14 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:29.570 23:26:15 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=82868 00:13:29.570 23:26:15 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:29.570 23:26:15 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:29.571 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:29.571 fio-3.35 00:13:29.571 Starting 1 process 00:13:34.836 23:26:20 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 82835 00:13:34.836 23:26:20 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:40.126 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 82835 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:40.126 23:26:25 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=82979 00:13:40.126 23:26:25 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.126 23:26:25 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:40.126 23:26:25 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 82979 00:13:40.126 23:26:25 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 82979 ']' 00:13:40.126 23:26:25 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.126 23:26:25 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.126 23:26:25 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.126 23:26:25 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.126 23:26:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:40.126 [2024-11-19 23:26:25.738207] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
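This is the core of the recovery test: fio keeps 60 seconds of random I/O in flight against /dev/ublkb1, the target is killed with SIGKILL mid-run, and a new spdk_tgt then re-attaches to the still-existing kernel device rather than re-creating it. A condensed, hedged sketch of that sequence, using the RPC names exactly as they appear in the trace:

  kill -9 "$spdk_pid"                                  # hard-kill the target while fio is running
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  spdk_pid=$!
  # once the new target is listening on /var/tmp/spdk.sock:
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096         # 64 MiB bdev, same name as before the crash
  rpc.py ublk_recover_disk malloc0 1                   # re-bind the existing /dev/ublkb1; drives the
                                                       # START_USER_RECOVERY/END_USER_RECOVERY commands below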
00:13:40.126 [2024-11-19 23:26:25.738754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82979 ] 00:13:40.126 [2024-11-19 23:26:25.890455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:40.126 [2024-11-19 23:26:25.907584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.126 [2024-11-19 23:26:25.907683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:13:40.385 23:26:26 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:40.385 [2024-11-19 23:26:26.527745] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:40.385 [2024-11-19 23:26:26.528665] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.385 23:26:26 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:40.385 malloc0 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.385 23:26:26 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:40.385 [2024-11-19 23:26:26.559849] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:40.385 [2024-11-19 23:26:26.559896] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:40.385 [2024-11-19 23:26:26.559903] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:40.385 [2024-11-19 23:26:26.567783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:40.385 [2024-11-19 23:26:26.567798] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:40.385 1 00:13:40.385 23:26:26 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.385 23:26:26 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 82868 00:13:41.759 [2024-11-19 23:26:27.567829] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:41.759 [2024-11-19 23:26:27.575753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:41.759 [2024-11-19 23:26:27.575767] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:42.694 [2024-11-19 23:26:28.575790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:42.695 [2024-11-19 23:26:28.579758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:42.695 [2024-11-19 23:26:28.579771] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1
00:13:43.629 [2024-11-19 23:26:29.579794] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:13:43.629 [2024-11-19 23:26:29.583756] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:13:43.629 [2024-11-19 23:26:29.583770] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:13:43.629 [2024-11-19 23:26:29.583776] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:13:43.629 [2024-11-19 23:26:29.583841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:14:05.556 [2024-11-19 23:26:50.969757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:14:05.556 [2024-11-19 23:26:50.976361] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:14:05.556 [2024-11-19 23:26:50.983951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:14:05.556 [2024-11-19 23:26:50.983971] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:14:32.111
00:14:32.111 fio_test: (groupid=0, jobs=1): err= 0: pid=82871: Tue Nov 19 23:27:15 2024
00:14:32.111 read: IOPS=15.3k, BW=59.9MiB/s (62.8MB/s)(3595MiB/60001msec)
00:14:32.111 slat (nsec): min=1063, max=484880, avg=4863.12, stdev=1600.60
00:14:32.111 clat (usec): min=1044, max=30338k, avg=4296.15, stdev=262660.38
00:14:32.111 lat (usec): min=1053, max=30338k, avg=4301.01, stdev=262660.38
00:14:32.111 clat percentiles (usec):
00:14:32.111 | 1.00th=[ 1696], 5.00th=[ 1811], 10.00th=[ 1844], 20.00th=[ 1860],
00:14:32.111 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[ 1909], 60.00th=[ 1909],
00:14:32.111 | 70.00th=[ 1926], 80.00th=[ 1942], 90.00th=[ 1991], 95.00th=[ 2835],
00:14:32.111 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 7046], 99.95th=[ 8455],
00:14:32.111 | 99.99th=[13042]
00:14:32.111 bw ( KiB/s): min=48128, max=128576, per=100.00%, avg=122806.51, stdev=14519.03, samples=59
00:14:32.111 iops : min=12032, max=32144, avg=30701.63, stdev=3629.76, samples=59
00:14:32.111 write: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)(3589MiB/60001msec); 0 zone resets
00:14:32.111 slat (nsec): min=1087, max=356986, avg=4885.98, stdev=1286.22
00:14:32.111 clat (usec): min=890, max=30338k, avg=4045.10, stdev=243069.56
00:14:32.111 lat (usec): min=893, max=30338k, avg=4049.99, stdev=243069.56
00:14:32.111 clat percentiles (usec):
00:14:32.111 | 1.00th=[ 1729], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1942],
00:14:32.111 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008],
00:14:32.111 | 70.00th=[ 2024], 80.00th=[ 2040], 90.00th=[ 2073], 95.00th=[ 2737],
00:14:32.111 | 99.00th=[ 5014], 99.50th=[ 5473], 99.90th=[ 7111], 99.95th=[ 8586],
00:14:32.111 | 99.99th=[13042]
00:14:32.111 bw ( KiB/s): min=47584, max=128344, per=100.00%, avg=122642.03, stdev=14694.42, samples=59
00:14:32.111 iops : min=11896, max=32086, avg=30660.51, stdev=3673.61, samples=59
00:14:32.111 lat (usec) : 1000=0.01%
00:14:32.111 lat (msec) : 2=73.90%, 4=23.61%, 10=2.45%, 20=0.03%, >=2000=0.01%
00:14:32.111 cpu : usr=3.30%, sys=15.39%, ctx=61284, majf=0, minf=13
00:14:32.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:14:32.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:32.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:32.111 issued rwts: total=920236,918818,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:32.111 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:32.111
00:14:32.111 Run status group 0 (all jobs):
00:14:32.111 READ: bw=59.9MiB/s (62.8MB/s), 59.9MiB/s-59.9MiB/s (62.8MB/s-62.8MB/s), io=3595MiB (3769MB), run=60001-60001msec
00:14:32.111 WRITE: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=3589MiB (3763MB), run=60001-60001msec
00:14:32.111
00:14:32.111 Disk stats (read/write):
00:14:32.111 ublkb1: ios=916812/915386, merge=0/0, ticks=3900592/3589494, in_queue=7490087, util=99.90%
00:14:32.111 23:27:15 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:32.111 [2024-11-19 23:27:15.903937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:14:32.111 [2024-11-19 23:27:15.958753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:14:32.111 [2024-11-19 23:27:15.958941] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:14:32.111 [2024-11-19 23:27:15.966772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:14:32.111 [2024-11-19 23:27:15.966857] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:14:32.111 [2024-11-19 23:27:15.966867] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.111 23:27:15 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:32.111 [2024-11-19 23:27:15.982823] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:14:32.111 [2024-11-19 23:27:15.984030] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:14:32.111 [2024-11-19 23:27:15.984056] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.111 23:27:15 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:14:32.111 23:27:15 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:14:32.111 23:27:15 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 82979
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 82979 ']'
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 82979
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:32.111 23:27:15 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82979
00:14:32.111 23:27:16 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:32.111 23:27:16 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:32.111 killing process with pid 82979
00:14:32.111 23:27:16 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82979'
00:14:32.111 23:27:16 ublk_recovery -- common/autotest_common.sh@973 -- # kill 82979
00:14:32.111 23:27:16 ublk_recovery -- common/autotest_common.sh@978 -- # wait 82979
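The cleanup trace above is the stock autotest_common.sh killprocess pattern: bail out on an empty pid, probe the target with kill -0, inspect the process name, then kill and reap. A minimal bash sketch of that logic, reconstructed from the xtrace output (an approximation, not the verbatim SPDK helper):

    # Sketch of the killprocess-style cleanup traced above; reconstructed
    # from the xtrace, so details of the real helper may differ.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid captured: nothing to kill
        kill -0 "$pid" || return 0           # probe first: is it still alive?
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The real helper special-cases process_name = sudo; omitted here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap the child, collect its exit code
    }

The trailing wait both reaps the child and propagates its exit status, which is why the ublk shutdown messages that follow come from an orderly teardown rather than a dangling process.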
00:14:32.111 [2024-11-19 23:27:16.181224] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:14:32.111 [2024-11-19 23:27:16.181267] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:14:32.111
00:14:32.111 real 1m2.930s
00:14:32.111 user 1m44.915s
00:14:32.111 sys 0m21.684s
00:14:32.111 23:27:16 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:32.111 ************************************
00:14:32.111 END TEST ublk_recovery
00:14:32.111 ************************************
00:14:32.111 23:27:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:14:32.111 23:27:16 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:14:32.111 23:27:16 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@260 -- # timing_exit lib
00:14:32.111 23:27:16 -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:32.111 23:27:16 -- common/autotest_common.sh@10 -- # set +x
00:14:32.111 23:27:16 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:14:32.111 23:27:16 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:14:32.111 23:27:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:32.111 23:27:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:32.111 23:27:16 -- common/autotest_common.sh@10 -- # set +x
00:14:32.111 ************************************
00:14:32.111 START TEST ftl
00:14:32.111 ************************************
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:14:32.111 * Looking for test storage...
00:14:32.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1693 -- # lcov --version
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:32.111 23:27:16 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:32.111 23:27:16 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:14:32.111 23:27:16 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:14:32.111 23:27:16 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:14:32.111 23:27:16 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:14:32.111 23:27:16 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:14:32.111 23:27:16 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:32.111 23:27:16 ftl -- scripts/common.sh@344 -- # case "$op" in
00:14:32.111 23:27:16 ftl -- scripts/common.sh@345 -- # : 1
00:14:32.111 23:27:16 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:32.111 23:27:16 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:32.111 23:27:16 ftl -- scripts/common.sh@365 -- # decimal 1
00:14:32.111 23:27:16 ftl -- scripts/common.sh@353 -- # local d=1
00:14:32.111 23:27:16 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:32.111 23:27:16 ftl -- scripts/common.sh@355 -- # echo 1
00:14:32.111 23:27:16 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:14:32.111 23:27:16 ftl -- scripts/common.sh@366 -- # decimal 2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@353 -- # local d=2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:32.111 23:27:16 ftl -- scripts/common.sh@355 -- # echo 2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:14:32.111 23:27:16 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:32.111 23:27:16 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:32.111 23:27:16 ftl -- scripts/common.sh@368 -- # return 0
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:32.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:32.111 --rc genhtml_branch_coverage=1
00:14:32.111 --rc genhtml_function_coverage=1
00:14:32.111 --rc genhtml_legend=1
00:14:32.111 --rc geninfo_all_blocks=1
00:14:32.111 --rc geninfo_unexecuted_blocks=1
00:14:32.111
00:14:32.111 '
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:32.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:32.111 --rc genhtml_branch_coverage=1
00:14:32.111 --rc genhtml_function_coverage=1
00:14:32.111 --rc genhtml_legend=1
00:14:32.111 --rc geninfo_all_blocks=1
00:14:32.111 --rc geninfo_unexecuted_blocks=1
00:14:32.111
00:14:32.111 '
00:14:32.111 23:27:16 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:14:32.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:32.111 --rc genhtml_branch_coverage=1
00:14:32.111 --rc genhtml_function_coverage=1
00:14:32.111 --rc genhtml_legend=1
00:14:32.112 --rc geninfo_all_blocks=1
00:14:32.112 --rc geninfo_unexecuted_blocks=1
00:14:32.112
00:14:32.112 '
00:14:32.112 23:27:16 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:14:32.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:32.112 --rc genhtml_branch_coverage=1
00:14:32.112 --rc genhtml_function_coverage=1
00:14:32.112 --rc genhtml_legend=1
00:14:32.112 --rc geninfo_all_blocks=1
00:14:32.112 --rc geninfo_unexecuted_blocks=1
00:14:32.112
00:14:32.112 '
00:14:32.112 23:27:16 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:14:32.112 23:27:16 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:14:32.112 23:27:16 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:14:32.112 23:27:16 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:14:32.112 23:27:16 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:14:32.112 23:27:16 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:14:32.112 23:27:16 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:32.112 23:27:16 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:14:32.112 23:27:16 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:14:32.112 23:27:16 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:32.112 23:27:16 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:32.112 23:27:16 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:14:32.112 23:27:16 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:14:32.112 23:27:16 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:32.112 23:27:16 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:32.112 23:27:16 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:14:32.112 23:27:16 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:14:32.112 23:27:16 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:32.112 23:27:16 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:32.112 23:27:16 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:14:32.112 23:27:16 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:14:32.112 23:27:16 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:32.112 23:27:16 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:32.112 23:27:16 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:32.112 23:27:16 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:32.112 23:27:16 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:14:32.112 23:27:16 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:14:32.112 23:27:16 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:32.112 23:27:16 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:32.112 23:27:16 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:32.112 23:27:16 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
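The cmp_versions trace a little further up shows how the lcov check decides that version 1.15 predates 2: both strings are split on ".", "-" and ":" and compared field by field. A condensed bash sketch of the same idea (simplified from the xtrace, not the verbatim scripts/common.sh source):

    # Compare two dotted version strings field by field; missing fields
    # compare as 0. Returns 0 when the relation named by $2 holds.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
            (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1   # decided the wrong way
        done
        return 1   # equal: neither strictly less nor greater
    }

    cmp_versions 1.15 '<' 2 && echo "lcov is older than 2"   # matches the traced result

Comparing numerically per field, rather than lexically on the whole string, is what keeps 1.15 correctly below 2 even though "1.15" sorts after "2" as text.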
00:14:32.112 23:27:16 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:14:32.112 23:27:16 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:14:32.112 23:27:16 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:14:32.112 23:27:16 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:32.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:32.112 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:32.112 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:32.112 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:32.112 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:32.112 23:27:17 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=83774
00:14:32.112 23:27:17 ftl -- ftl/ftl.sh@38 -- # waitforlisten 83774
00:14:32.112 23:27:17 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:14:32.112 23:27:17 ftl -- common/autotest_common.sh@835 -- # '[' -z 83774 ']'
00:14:32.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:32.112 23:27:17 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:32.112 23:27:17 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:32.112 23:27:17 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:32.112 23:27:17 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:32.112 23:27:17 ftl -- common/autotest_common.sh@10 -- # set +x
00:14:32.112 [2024-11-19 23:27:17.225722] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
00:14:32.112 [2024-11-19 23:27:17.225859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83774 ]
00:14:32.112 [2024-11-19 23:27:17.383969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:32.112 [2024-11-19 23:27:17.412468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:32.686 23:27:18 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:32.686 23:27:18 ftl -- common/autotest_common.sh@868 -- # return 0
00:14:32.686 23:27:18 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:14:32.686 23:27:18 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:14:32.686 23:27:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:14:32.686 23:27:18 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@50 -- # break
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
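The cache-disk selection just traced works by dumping every bdev over RPC and letting jq filter for controllers that look cache-capable: 64-byte metadata, non-zoned, and at least 1310720 blocks. The same pattern, spelled out as a standalone snippet (paths and the threshold follow this log; a sketch, not the exact ftl.sh excerpt):

    #!/usr/bin/env bash
    # Pick a non-volatile cache disk the way ftl.sh@47-@50 does above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Every bdev with 64B metadata, non-zoned, and >= 1310720 blocks qualifies;
    # the filter emits the PCI address of each matching NVMe controller.
    cache_disks=$("$rpc_py" bdev_get_bdevs \
        | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
                 .driver_specific.nvme[].pci_address')

    for disk in $cache_disks; do
        nv_cache=$disk   # first match wins, hence the immediate break in the trace
        break
    done

    [ -z "$nv_cache" ] && { echo "no cache-capable disk found" >&2; exit 1; }
    echo "using $nv_cache as non-volatile cache"

Filtering on md_size==64 is what steers the script to the QEMU controller at 0000:00:10.0 here, leaving 0000:00:11.0 free to serve as the base device selected next.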
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:14:33.261 23:27:19 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:14:33.526 23:27:19 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:14:33.526 23:27:19 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:14:33.526 23:27:19 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:14:33.526 23:27:19 ftl -- ftl/ftl.sh@63 -- # break
00:14:33.526 23:27:19 ftl -- ftl/ftl.sh@66 -- # killprocess 83774
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@954 -- # '[' -z 83774 ']'
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@958 -- # kill -0 83774
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@959 -- # uname
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83774
00:14:33.526 killing process with pid 83774
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83774'
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@973 -- # kill 83774
00:14:33.526 23:27:19 ftl -- common/autotest_common.sh@978 -- # wait 83774
00:14:33.787 23:27:19 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:14:33.787 23:27:19 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:14:33.787 23:27:19 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:14:33.787 23:27:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:33.787 23:27:19 ftl -- common/autotest_common.sh@10 -- # set +x
00:14:33.787 ************************************
00:14:33.787 START TEST ftl_fio_basic
00:14:33.787 ************************************
00:14:33.787 23:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:14:33.787 * Looking for test storage...
00:14:33.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:14:33.787 23:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:33.787 23:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version
00:14:33.787 23:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-:
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-:
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<'
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1
00:14:34.048 23:27:19 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:34.048 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:34.048 --rc genhtml_branch_coverage=1
00:14:34.048 --rc genhtml_function_coverage=1
00:14:34.048 --rc genhtml_legend=1
00:14:34.048 --rc geninfo_all_blocks=1
00:14:34.048 --rc geninfo_unexecuted_blocks=1
00:14:34.048
00:14:34.049 '
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:34.049 --rc genhtml_branch_coverage=1
00:14:34.049 --rc genhtml_function_coverage=1
00:14:34.049 --rc genhtml_legend=1
00:14:34.049 --rc geninfo_all_blocks=1
00:14:34.049 --rc geninfo_unexecuted_blocks=1
00:14:34.049
00:14:34.049 '
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:14:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:34.049 --rc genhtml_branch_coverage=1
00:14:34.049 --rc genhtml_function_coverage=1
00:14:34.049 --rc genhtml_legend=1
00:14:34.049 --rc geninfo_all_blocks=1
00:14:34.049 --rc geninfo_unexecuted_blocks=1
00:14:34.049
00:14:34.049 '
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:14:34.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:34.049 --rc genhtml_branch_coverage=1
00:14:34.049 --rc genhtml_function_coverage=1
00:14:34.049 --rc genhtml_legend=1
00:14:34.049 --rc geninfo_all_blocks=1
00:14:34.049 --rc geninfo_unexecuted_blocks=1
00:14:34.049
00:14:34.049 '
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=83890
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 83890
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 83890 ']'
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:34.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
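The suite declarations traced above are how fio.sh turns its third positional argument ("basic" in this run) into a list of fio job files. A minimal sketch of that dispatch, reconstructed from the declared values (not the full fio.sh):

    # Map suite names to fio job lists, as declared in the trace above.
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    device=$1          # e.g. 0000:00:11.0
    cache_device=$2    # e.g. 0000:00:10.0
    tests=${suite[$3]} # "basic" here, expanding to the three randw-verify jobs

    # Mirrors the fio.sh@34 guard: an unknown suite name leaves $tests empty.
    if [ -z "$tests" ]; then
        echo "Unknown test suite '$3'" >&2
        exit 1
    fi

    for test in $tests; do
        echo "would run fio config: $test"   # the real script launches fio here
    done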
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:34.049 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:14:34.049 [2024-11-19 23:27:20.148624] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
00:14:34.049 [2024-11-19 23:27:20.149837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83890 ]
00:14:34.308 [2024-11-19 23:27:20.310355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:34.308 [2024-11-19 23:27:20.342901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:34.308 [2024-11-19 23:27:20.345813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:34.308 [2024-11-19 23:27:20.345831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev
00:14:34.879 23:27:20 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:14:35.140 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:14:35.399 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:14:35.399 {
00:14:35.399 "name": "nvme0n1",
00:14:35.399 "aliases": [
00:14:35.399 "010895a7-be8f-4ea8-ac5b-dfd916db191e"
00:14:35.399 ],
00:14:35.399 "product_name": "NVMe disk",
00:14:35.399 "block_size": 4096,
00:14:35.399 "num_blocks": 1310720,
00:14:35.399 "uuid": "010895a7-be8f-4ea8-ac5b-dfd916db191e",
00:14:35.399 "numa_id": -1,
00:14:35.399 "assigned_rate_limits": {
00:14:35.399 "rw_ios_per_sec": 0,
00:14:35.399 "rw_mbytes_per_sec": 0,
00:14:35.399 "r_mbytes_per_sec": 0,
00:14:35.399 "w_mbytes_per_sec": 0
00:14:35.399 },
00:14:35.399 "claimed": false,
00:14:35.399 "zoned": false,
00:14:35.399 "supported_io_types": {
00:14:35.399 "read": true,
00:14:35.399 "write": true,
00:14:35.399 "unmap": true,
00:14:35.399 "flush": true,
00:14:35.399 "reset": true,
00:14:35.399 "nvme_admin": true,
00:14:35.399 "nvme_io": true,
00:14:35.399 "nvme_io_md": false,
00:14:35.399 "write_zeroes": true,
00:14:35.399 "zcopy": false,
00:14:35.399 "get_zone_info": false,
00:14:35.399 "zone_management": false,
00:14:35.399 "zone_append": false,
00:14:35.399 "compare": true,
00:14:35.399 "compare_and_write": false,
00:14:35.399 "abort": true,
00:14:35.399 "seek_hole": false,
00:14:35.399 "seek_data": false,
00:14:35.399 "copy": true,
00:14:35.399 "nvme_iov_md": false
00:14:35.399 },
00:14:35.399 "driver_specific": {
00:14:35.399 "nvme": [
00:14:35.399 {
00:14:35.399 "pci_address": "0000:00:11.0",
00:14:35.399 "trid": {
00:14:35.399 "trtype": "PCIe",
00:14:35.399 "traddr": "0000:00:11.0"
00:14:35.399 },
00:14:35.399 "ctrlr_data": {
00:14:35.399 "cntlid": 0,
00:14:35.399 "vendor_id": "0x1b36",
00:14:35.399 "model_number": "QEMU NVMe Ctrl",
00:14:35.399 "serial_number": "12341",
00:14:35.399 "firmware_revision": "8.0.0",
00:14:35.399 "subnqn": "nqn.2019-08.org.qemu:12341",
00:14:35.399 "oacs": {
00:14:35.399 "security": 0,
00:14:35.399 "format": 1,
00:14:35.399 "firmware": 0,
00:14:35.399 "ns_manage": 1
00:14:35.399 },
00:14:35.399 "multi_ctrlr": false,
00:14:35.400 "ana_reporting": false
00:14:35.400 },
00:14:35.400 "vs": {
00:14:35.400 "nvme_version": "1.4"
00:14:35.400 },
00:14:35.400 "ns_data": {
00:14:35.400 "id": 1,
00:14:35.400 "can_share": false
00:14:35.400 }
00:14:35.400 }
00:14:35.400 ],
00:14:35.400 "mp_policy": "active_passive"
00:14:35.400 }
00:14:35.400 }
00:14:35.400 ]'
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:14:35.400 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:14:35.658 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores=
00:14:35.658 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:14:35.917 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e34916f3-8243-4a87-8730-d770f1fd342d
00:14:35.917 23:27:21 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e34916f3-8243-4a87-8730-d770f1fd342d
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=573796c1-c74a-4283-9512-dbec09cc1c19
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 573796c1-c74a-4283-9512-dbec09cc1c19
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=573796c1-c74a-4283-9512-dbec09cc1c19
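The get_bdev_size helper traced here is bdev_get_bdevs piped through jq: block_size times num_blocks, scaled to MiB. For nvme0n1 above that is 4096 x 1310720 / 1048576 = 5120 MiB, matching the echoed value; the lvol created next reports 4096 x 26476544 / 1048576 = 103424 MiB. A sketch of the helper, reconstructed from the xtrace (not the verbatim autotest_common.sh; assumes $rpc_py points at scripts/rpc.py as set earlier in this log):

    # Report a bdev's size in MiB, as the traced get_bdev_size does.
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))   # e.g. 4096 * 1310720 -> 5120 MiB
    }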
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size=
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 573796c1-c74a-4283-9512-dbec09cc1c19
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=573796c1-c74a-4283-9512-dbec09cc1c19
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:14:35.917 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 573796c1-c74a-4283-9512-dbec09cc1c19
00:14:36.176 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:14:36.176 {
00:14:36.176 "name": "573796c1-c74a-4283-9512-dbec09cc1c19",
00:14:36.176 "aliases": [
00:14:36.176 "lvs/nvme0n1p0"
00:14:36.176 ],
00:14:36.176 "product_name": "Logical Volume",
00:14:36.176 "block_size": 4096,
00:14:36.176 "num_blocks": 26476544,
00:14:36.176 "uuid": "573796c1-c74a-4283-9512-dbec09cc1c19",
00:14:36.176 "assigned_rate_limits": {
00:14:36.176 "rw_ios_per_sec": 0,
00:14:36.176 "rw_mbytes_per_sec": 0,
00:14:36.176 "r_mbytes_per_sec": 0,
00:14:36.176 "w_mbytes_per_sec": 0
00:14:36.176 },
00:14:36.176 "claimed": false,
00:14:36.176 "zoned": false,
00:14:36.176 "supported_io_types": {
00:14:36.176 "read": true,
00:14:36.176 "write": true,
00:14:36.176 "unmap": true,
00:14:36.176 "flush": false,
00:14:36.176 "reset": true,
00:14:36.176 "nvme_admin": false,
00:14:36.176 "nvme_io": false,
00:14:36.176 "nvme_io_md": false,
00:14:36.176 "write_zeroes": true,
00:14:36.176 "zcopy": false,
00:14:36.176 "get_zone_info": false,
00:14:36.176 "zone_management": false,
00:14:36.176 "zone_append": false,
00:14:36.176 "compare": false,
00:14:36.176 "compare_and_write": false,
00:14:36.176 "abort": false,
00:14:36.176 "seek_hole": true,
00:14:36.176 "seek_data": true,
00:14:36.176 "copy": false,
00:14:36.176 "nvme_iov_md": false
00:14:36.176 },
00:14:36.176 "driver_specific": {
00:14:36.176 "lvol": {
00:14:36.176 "lvol_store_uuid": "e34916f3-8243-4a87-8730-d770f1fd342d",
00:14:36.176 "base_bdev": "nvme0n1",
00:14:36.176 "thin_provision": true,
00:14:36.176 "num_allocated_clusters": 0,
00:14:36.176 "snapshot": false,
00:14:36.176 "clone": false,
00:14:36.176 "esnap_clone": false
00:14:36.176 }
00:14:36.176 }
00:14:36.176 }
00:14:36.176 ]'
00:14:36.176 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:14:36.176 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:14:36.176 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:14:36.176 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:14:36.176 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:14:36.176 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:14:36.434 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171
00:14:36.434 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev
00:14:36.434 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]]
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 573796c1-c74a-4283-9512-dbec09cc1c19
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=573796c1-c74a-4283-9512-dbec09cc1c19
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 573796c1-c74a-4283-9512-dbec09cc1c19
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:14:36.697 {
00:14:36.697 "name": "573796c1-c74a-4283-9512-dbec09cc1c19",
00:14:36.697 "aliases": [
00:14:36.697 "lvs/nvme0n1p0"
00:14:36.697 ],
00:14:36.697 "product_name": "Logical Volume",
00:14:36.697 "block_size": 4096,
00:14:36.697 "num_blocks": 26476544,
00:14:36.697 "uuid": "573796c1-c74a-4283-9512-dbec09cc1c19",
00:14:36.697 "assigned_rate_limits": {
00:14:36.697 "rw_ios_per_sec": 0,
00:14:36.697 "rw_mbytes_per_sec": 0,
00:14:36.697 "r_mbytes_per_sec": 0,
00:14:36.697 "w_mbytes_per_sec": 0
00:14:36.697 },
00:14:36.697 "claimed": false,
00:14:36.697 "zoned": false,
00:14:36.697 "supported_io_types": {
00:14:36.697 "read": true,
00:14:36.697 "write": true,
00:14:36.697 "unmap": true,
00:14:36.697 "flush": false,
00:14:36.697 "reset": true,
00:14:36.697 "nvme_admin": false,
00:14:36.697 "nvme_io": false,
00:14:36.697 "nvme_io_md": false,
00:14:36.697 "write_zeroes": true,
00:14:36.697 "zcopy": false,
00:14:36.697 "get_zone_info": false,
00:14:36.697 "zone_management": false,
00:14:36.697 "zone_append": false,
00:14:36.697 "compare": false,
00:14:36.697 "compare_and_write": false,
00:14:36.697 "abort": false,
00:14:36.697 "seek_hole": true,
00:14:36.697 "seek_data": true,
00:14:36.697 "copy": false,
00:14:36.697 "nvme_iov_md": false
00:14:36.697 },
00:14:36.697 "driver_specific": {
00:14:36.697 "lvol": {
00:14:36.697 "lvol_store_uuid": "e34916f3-8243-4a87-8730-d770f1fd342d",
00:14:36.697 "base_bdev": "nvme0n1",
00:14:36.697 "thin_provision": true,
00:14:36.697 "num_allocated_clusters": 0,
00:14:36.697 "snapshot": false,
00:14:36.697 "clone": false,
00:14:36.697 "esnap_clone": false
00:14:36.697 }
00:14:36.697 }
00:14:36.697 }
00:14:36.697 ]'
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:14:36.697 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:14:36.955 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:14:36.955 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:14:36.955 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:14:36.955 23:27:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:14:36.955 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171
00:14:36.955 23:27:22 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:14:36.955 23:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0
00:14:36.955 23:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60
00:14:36.955 23:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']'
00:14:36.955 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected
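The "unary operator expected" error just logged is a classic test-expression bug: fio.sh line 52 expands an unset variable unquoted inside `[ ... -eq 1 ]`, so the operator is left with no left operand, `[` fails, the condition counts as false, and the script happens to fall through to the intended branch anyway. A minimal reproduction and two standard fixes (illustrative; the variable name is a placeholder, not the fio.sh source):

    #!/usr/bin/env bash
    unset use_append   # stands in for whatever fio.sh leaves empty at line 52

    # Reproduces the logged failure: "$use_append" expands to nothing,
    # so the shell actually runs `[ -eq 1 ]`.
    [ $use_append -eq 1 ] && echo "append mode"
    # -> bash: [: -eq: unary operator expected

    # Fix 1: quote the expansion and give it a numeric default, so [ always
    # receives a valid integer operand.
    [ "${use_append:-0}" -eq 1 ] && echo "append mode"

    # Fix 2: use an arithmetic context, which tolerates empty expansions
    # once a default is supplied.
    (( ${use_append:-0} == 1 )) && echo "append mode"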
00:14:36.955 23:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 573796c1-c74a-4283-9512-dbec09cc1c19
00:14:36.956 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=573796c1-c74a-4283-9512-dbec09cc1c19
00:14:36.956 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:14:36.956 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:14:36.956 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:14:36.956 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 573796c1-c74a-4283-9512-dbec09cc1c19
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:14:37.214 {
00:14:37.214 "name": "573796c1-c74a-4283-9512-dbec09cc1c19",
00:14:37.214 "aliases": [
00:14:37.214 "lvs/nvme0n1p0"
00:14:37.214 ],
00:14:37.214 "product_name": "Logical Volume",
00:14:37.214 "block_size": 4096,
00:14:37.214 "num_blocks": 26476544,
00:14:37.214 "uuid": "573796c1-c74a-4283-9512-dbec09cc1c19",
00:14:37.214 "assigned_rate_limits": {
00:14:37.214 "rw_ios_per_sec": 0,
00:14:37.214 "rw_mbytes_per_sec": 0,
00:14:37.214 "r_mbytes_per_sec": 0,
00:14:37.214 "w_mbytes_per_sec": 0
00:14:37.214 },
00:14:37.214 "claimed": false,
00:14:37.214 "zoned": false,
00:14:37.214 "supported_io_types": {
00:14:37.214 "read": true,
00:14:37.214 "write": true,
00:14:37.214 "unmap": true,
00:14:37.214 "flush": false,
00:14:37.214 "reset": true,
00:14:37.214 "nvme_admin": false,
00:14:37.214 "nvme_io": false,
00:14:37.214 "nvme_io_md": false,
00:14:37.214 "write_zeroes": true,
00:14:37.214 "zcopy": false,
00:14:37.214 "get_zone_info": false,
00:14:37.214 "zone_management": false,
00:14:37.214 "zone_append": false,
00:14:37.214 "compare": false,
00:14:37.214 "compare_and_write": false,
00:14:37.214 "abort": false,
00:14:37.214 "seek_hole": true,
00:14:37.214 "seek_data": true,
00:14:37.214 "copy": false,
00:14:37.214 "nvme_iov_md": false
00:14:37.214 },
00:14:37.214 "driver_specific": {
00:14:37.214 "lvol": {
00:14:37.214 "lvol_store_uuid": "e34916f3-8243-4a87-8730-d770f1fd342d",
00:14:37.214 "base_bdev": "nvme0n1",
00:14:37.214 "thin_provision": true,
00:14:37.214 "num_allocated_clusters": 0,
00:14:37.214 "snapshot": false,
00:14:37.214 "clone": false,
00:14:37.214 "esnap_clone": false
00:14:37.214 }
00:14:37.214 }
00:14:37.214 }
00:14:37.214 ]'
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']'
00:14:37.214 23:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 573796c1-c74a-4283-9512-dbec09cc1c19 -c nvc0n1p0 --l2p_dram_limit 60
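Everything since the second target started boils down to a short RPC sequence ending in the bdev_ftl_create call whose FTL bring-up trace follows. A condensed recap of those RPCs (values copied from this run's log; the lvstore and lvol UUIDs are per-run and will differ):

    #!/usr/bin/env bash
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base namespace and NV-cache controller.
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0

    # Thin-provisioned lvol carving 103424 MiB out of the base device.
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u e34916f3-8243-4a87-8730-d770f1fd342d

    # One 5171 MiB split of the cache controller for the write buffer.
    $rpc_py bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev over lvol + cache split; -t 240 widens the RPC timeout since
    # FTL startup may need to scrub the NV cache region (see the notice below).
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d 573796c1-c74a-4283-9512-dbec09cc1c19 -c nvc0n1p0 --l2p_dram_limit 60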
00:14:37.473 [2024-11-19 23:27:23.559474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.473 [2024-11-19 23:27:23.559613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:14:37.473 [2024-11-19 23:27:23.559628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:14:37.473 [2024-11-19 23:27:23.559645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.473 [2024-11-19 23:27:23.559716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.473 [2024-11-19 23:27:23.559726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:14:37.473 [2024-11-19 23:27:23.559744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:14:37.473 [2024-11-19 23:27:23.559755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.473 [2024-11-19 23:27:23.559786] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:14:37.473 [2024-11-19 23:27:23.560001] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:14:37.473 [2024-11-19 23:27:23.560013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.473 [2024-11-19 23:27:23.560020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:14:37.473 [2024-11-19 23:27:23.560028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms
00:14:37.473 [2024-11-19 23:27:23.560035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.473 [2024-11-19 23:27:23.560090] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 14db239c-2203-4f7d-8907-e1443789ced8
00:14:37.473 [2024-11-19 23:27:23.561101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.473 [2024-11-19 23:27:23.561116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:14:37.473 [2024-11-19 23:27:23.561125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:14:37.473 [2024-11-19 23:27:23.561132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.473 [2024-11-19 23:27:23.566264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.473 [2024-11-19 23:27:23.566366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:14:37.473 [2024-11-19 23:27:23.566381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.063 ms
00:14:37.473 [2024-11-19 23:27:23.566390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.473 [2024-11-19 23:27:23.566480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.473 [2024-11-19 23:27:23.566488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:14:37.473 [2024-11-19 23:27:23.566499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:14:37.473 [2024-11-19 23:27:23.566504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.473 [2024-11-19 23:27:23.566549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.473 [2024-11-19 23:27:23.566557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:14:37.473 [2024-11-19 23:27:23.566565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:14:37.474 [2024-11-19 23:27:23.566571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.474 [2024-11-19 23:27:23.566597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:14:37.474 [2024-11-19 23:27:23.567896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.474 [2024-11-19 23:27:23.567922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:14:37.474 [2024-11-19 23:27:23.567930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms
00:14:37.474 [2024-11-19 23:27:23.567938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.474 [2024-11-19 23:27:23.567976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.474 [2024-11-19 23:27:23.567999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:14:37.474 [2024-11-19 23:27:23.568006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:14:37.474 [2024-11-19 23:27:23.568015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.474 [2024-11-19 23:27:23.568039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:14:37.474 [2024-11-19 23:27:23.568151] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:14:37.474 [2024-11-19 23:27:23.568161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:14:37.474 [2024-11-19 23:27:23.568179] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:14:37.474 [2024-11-19 23:27:23.568188] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568198] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:14:37.474 [2024-11-19 23:27:23.568221] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:14:37.474 [2024-11-19 23:27:23.568235] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:14:37.474 [2024-11-19 23:27:23.568242] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:14:37.474 [2024-11-19 23:27:23.568249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.474 [2024-11-19 23:27:23.568256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:14:37.474 [2024-11-19 23:27:23.568263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms
00:14:37.474 [2024-11-19 23:27:23.568271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.474 [2024-11-19 23:27:23.568351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.474 [2024-11-19 23:27:23.568369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:14:37.474 [2024-11-19 23:27:23.568377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:14:37.474 [2024-11-19 23:27:23.568384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.474 [2024-11-19 23:27:23.568481] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:14:37.474 [2024-11-19 23:27:23.568490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:14:37.474 [2024-11-19 23:27:23.568504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:14:37.474 [2024-11-19 23:27:23.568535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:14:37.474 [2024-11-19 23:27:23.568552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:14:37.474 [2024-11-19 23:27:23.568565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:14:37.474 [2024-11-19 23:27:23.568572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:14:37.474 [2024-11-19 23:27:23.568579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:14:37.474 [2024-11-19 23:27:23.568588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:14:37.474 [2024-11-19 23:27:23.568593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:14:37.474 [2024-11-19 23:27:23.568599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:14:37.474 [2024-11-19 23:27:23.568620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:14:37.474 [2024-11-19 23:27:23.568636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:14:37.474 [2024-11-19 23:27:23.568655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:14:37.474 [2024-11-19 23:27:23.568671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:14:37.474 [2024-11-19 23:27:23.568691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:14:37.474 [2024-11-19 23:27:23.568703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:14:37.474 [2024-11-19 23:27:23.568708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:14:37.474 [2024-11-19 23:27:23.568714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:14:37.474 [2024-11-19 23:27:23.568719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:14:37.474 [2024-11-19 23:27:23.568725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:14:37.474 [2024-11-19 23:27:23.568739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:14:37.474 [2024-11-19 23:27:23.568746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:14:37.474 [2024-11-19 23:27:23.568751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:14:37.474 [2024-11-19 23:27:23.568758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:37.475 [2024-11-19 23:27:23.568763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:14:37.475 [2024-11-19 23:27:23.568770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:14:37.475 [2024-11-19 23:27:23.568775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:37.475 [2024-11-19 23:27:23.568780] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:14:37.475 [2024-11-19 23:27:23.568788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:14:37.475 [2024-11-19 23:27:23.568796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:14:37.475 [2024-11-19 23:27:23.568811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:14:37.475 [2024-11-19 23:27:23.568818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:14:37.475 [2024-11-19 23:27:23.568823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:14:37.475 [2024-11-19 23:27:23.568830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:14:37.475 [2024-11-19 23:27:23.568835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:14:37.475 [2024-11-19 23:27:23.568842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:14:37.475 [2024-11-19 23:27:23.568847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:14:37.475 [2024-11-19 23:27:23.568856] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:14:37.475 [2024-11-19 23:27:23.568864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:14:37.475 [2024-11-19 23:27:23.568879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:14:37.475 [2024-11-19 23:27:23.568885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:14:37.475 [2024-11-19 23:27:23.568892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:14:37.475 [2024-11-19 23:27:23.568898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:14:37.475 [2024-11-19 23:27:23.568904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:14:37.475 [2024-11-19 23:27:23.568909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:14:37.475 [2024-11-19 23:27:23.568917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:14:37.475 [2024-11-19 23:27:23.568923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:14:37.475 [2024-11-19 23:27:23.568930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:14:37.475 [2024-11-19 23:27:23.568935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:14:37.475 [2024-11-19 23:27:23.568941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:14:37.475 [2024-11-19 23:27:23.568946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:14:37.475 [2024-11-19 23:27:23.568952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:14:37.475 [2024-11-19 23:27:23.568959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:14:37.475 [2024-11-19 23:27:23.568965] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:14:37.475 [2024-11-19 23:27:23.568971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:14:37.475 [2024-11-19 23:27:23.568978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:14:37.475 [2024-11-19 23:27:23.568983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:14:37.475 [2024-11-19 23:27:23.568990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:14:37.475 [2024-11-19 23:27:23.568996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:14:37.475 [2024-11-19 23:27:23.569003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:14:37.475 [2024-11-19 23:27:23.569010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:14:37.475 [2024-11-19 23:27:23.569019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms
00:14:37.475 [2024-11-19 23:27:23.569025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:14:37.475 [2024-11-19 23:27:23.569089] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:14:37.475 [2024-11-19 23:27:23.569106] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:14:40.808 [2024-11-19 23:27:26.881892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.882072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:40.808 [2024-11-19 23:27:26.882095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3312.785 ms 00:14:40.808 [2024-11-19 23:27:26.882105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.890644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.890679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:40.808 [2024-11-19 23:27:26.890695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.401 ms 00:14:40.808 [2024-11-19 23:27:26.890703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.890819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.890830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:40.808 [2024-11-19 23:27:26.890841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:14:40.808 [2024-11-19 23:27:26.890849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.913499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.913554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:40.808 [2024-11-19 23:27:26.913580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.585 ms 00:14:40.808 [2024-11-19 23:27:26.913609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.913666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.913681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:40.808 [2024-11-19 23:27:26.913697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:40.808 [2024-11-19 23:27:26.913708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.914150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.914184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:40.808 [2024-11-19 23:27:26.914203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:14:40.808 [2024-11-19 23:27:26.914219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.914412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.914434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:40.808 [2024-11-19 23:27:26.914451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:14:40.808 [2024-11-19 23:27:26.914464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.921140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.808 [2024-11-19 23:27:26.921173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:40.808 [2024-11-19 
23:27:26.921186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:14:40.808 [2024-11-19 23:27:26.921195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:40.808 [2024-11-19 23:27:26.930288] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:40.809 [2024-11-19 23:27:26.944977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:40.809 [2024-11-19 23:27:26.945009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:40.809 [2024-11-19 23:27:26.945020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.701 ms 00:14:40.809 [2024-11-19 23:27:26.945029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.000119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.000165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:41.067 [2024-11-19 23:27:27.000178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.054 ms 00:14:41.067 [2024-11-19 23:27:27.000190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.000369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.000391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:41.067 [2024-11-19 23:27:27.000400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:14:41.067 [2024-11-19 23:27:27.000410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.003329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.003493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:41.067 [2024-11-19 23:27:27.003509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.892 ms 00:14:41.067 [2024-11-19 23:27:27.003520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.006411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.006443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:41.067 [2024-11-19 23:27:27.006453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:14:41.067 [2024-11-19 23:27:27.006463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.006777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.006800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:41.067 [2024-11-19 23:27:27.006820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:14:41.067 [2024-11-19 23:27:27.006831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.035103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.035140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:41.067 [2024-11-19 23:27:27.035150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.244 ms 00:14:41.067 [2024-11-19 23:27:27.035160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.039137] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.039171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:41.067 [2024-11-19 23:27:27.039181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:14:41.067 [2024-11-19 23:27:27.039191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.067 [2024-11-19 23:27:27.042425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.067 [2024-11-19 23:27:27.042460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:14:41.068 [2024-11-19 23:27:27.042469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.192 ms 00:14:41.068 [2024-11-19 23:27:27.042479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.068 [2024-11-19 23:27:27.045953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.068 [2024-11-19 23:27:27.045987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:41.068 [2024-11-19 23:27:27.045996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.425 ms 00:14:41.068 [2024-11-19 23:27:27.046007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.068 [2024-11-19 23:27:27.046050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.068 [2024-11-19 23:27:27.046061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:41.068 [2024-11-19 23:27:27.046070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:41.068 [2024-11-19 23:27:27.046079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.068 [2024-11-19 23:27:27.046151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.068 [2024-11-19 23:27:27.046162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:41.068 [2024-11-19 23:27:27.046172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:14:41.068 [2024-11-19 23:27:27.046182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.068 [2024-11-19 23:27:27.047093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3487.168 ms, result 0 00:14:41.068 { 00:14:41.068 "name": "ftl0", 00:14:41.068 "uuid": "14db239c-2203-4f7d-8907-e1443789ced8" 00:14:41.068 } 00:14:41.068 23:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:41.068 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:14:41.068 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.068 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:14:41.068 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.068 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.068 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:41.329 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:41.329 [ 00:14:41.329 { 00:14:41.329 "name": "ftl0", 00:14:41.329 "aliases": [ 00:14:41.329 "14db239c-2203-4f7d-8907-e1443789ced8" 00:14:41.329 ], 00:14:41.329 "product_name": "FTL disk", 00:14:41.329 
"block_size": 4096, 00:14:41.329 "num_blocks": 20971520, 00:14:41.329 "uuid": "14db239c-2203-4f7d-8907-e1443789ced8", 00:14:41.329 "assigned_rate_limits": { 00:14:41.329 "rw_ios_per_sec": 0, 00:14:41.329 "rw_mbytes_per_sec": 0, 00:14:41.329 "r_mbytes_per_sec": 0, 00:14:41.329 "w_mbytes_per_sec": 0 00:14:41.329 }, 00:14:41.329 "claimed": false, 00:14:41.329 "zoned": false, 00:14:41.329 "supported_io_types": { 00:14:41.329 "read": true, 00:14:41.329 "write": true, 00:14:41.329 "unmap": true, 00:14:41.329 "flush": true, 00:14:41.329 "reset": false, 00:14:41.329 "nvme_admin": false, 00:14:41.329 "nvme_io": false, 00:14:41.329 "nvme_io_md": false, 00:14:41.329 "write_zeroes": true, 00:14:41.329 "zcopy": false, 00:14:41.329 "get_zone_info": false, 00:14:41.329 "zone_management": false, 00:14:41.329 "zone_append": false, 00:14:41.329 "compare": false, 00:14:41.329 "compare_and_write": false, 00:14:41.329 "abort": false, 00:14:41.329 "seek_hole": false, 00:14:41.329 "seek_data": false, 00:14:41.329 "copy": false, 00:14:41.329 "nvme_iov_md": false 00:14:41.329 }, 00:14:41.329 "driver_specific": { 00:14:41.329 "ftl": { 00:14:41.329 "base_bdev": "573796c1-c74a-4283-9512-dbec09cc1c19", 00:14:41.329 "cache": "nvc0n1p0" 00:14:41.329 } 00:14:41.329 } 00:14:41.329 } 00:14:41.329 ] 00:14:41.329 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:14:41.329 23:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:41.329 23:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:41.596 23:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:14:41.596 23:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:41.856 [2024-11-19 23:27:27.849620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.849787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:41.856 [2024-11-19 23:27:27.849808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:14:41.856 [2024-11-19 23:27:27.849827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.849870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:41.856 [2024-11-19 23:27:27.850335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.850367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:41.856 [2024-11-19 23:27:27.850379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:14:41.856 [2024-11-19 23:27:27.850388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.850888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.850925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:41.856 [2024-11-19 23:27:27.850934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:14:41.856 [2024-11-19 23:27:27.850943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.854184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.854207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:41.856 [2024-11-19 
23:27:27.854216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:14:41.856 [2024-11-19 23:27:27.854226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.860438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.860472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:14:41.856 [2024-11-19 23:27:27.860482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:14:41.856 [2024-11-19 23:27:27.860492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.862096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.862135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:41.856 [2024-11-19 23:27:27.862144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.509 ms 00:14:41.856 [2024-11-19 23:27:27.862153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.866427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.866466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:41.856 [2024-11-19 23:27:27.866477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:14:41.856 [2024-11-19 23:27:27.866487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.866656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.866668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:41.856 [2024-11-19 23:27:27.866676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:14:41.856 [2024-11-19 23:27:27.866685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.868754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.868786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:14:41.856 [2024-11-19 23:27:27.868794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.044 ms 00:14:41.856 [2024-11-19 23:27:27.868803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.870395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.870430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:14:41.856 [2024-11-19 23:27:27.870439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.557 ms 00:14:41.856 [2024-11-19 23:27:27.870447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.871656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.871691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:41.856 [2024-11-19 23:27:27.871700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:14:41.856 [2024-11-19 23:27:27.871709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.872722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.856 [2024-11-19 23:27:27.872768] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:41.856 [2024-11-19 23:27:27.872777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:14:41.856 [2024-11-19 23:27:27.872786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.856 [2024-11-19 23:27:27.872829] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:41.856 [2024-11-19 23:27:27.872847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.872997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 
23:27:27.873040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:41.856 [2024-11-19 23:27:27.873147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:14:41.857 [2024-11-19 23:27:27.873255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:41.857 [2024-11-19 23:27:27.873725] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:41.857 [2024-11-19 23:27:27.873744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 14db239c-2203-4f7d-8907-e1443789ced8 00:14:41.857 [2024-11-19 23:27:27.873766] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:41.857 [2024-11-19 23:27:27.873773] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:41.857 [2024-11-19 23:27:27.873782] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:41.857 [2024-11-19 23:27:27.873789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:41.857 [2024-11-19 23:27:27.873798] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:41.857 [2024-11-19 23:27:27.873805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:41.857 [2024-11-19 23:27:27.873814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:41.857 [2024-11-19 23:27:27.873820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:41.857 [2024-11-19 23:27:27.873829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:41.857 [2024-11-19 23:27:27.873836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.857 [2024-11-19 23:27:27.873845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:41.857 [2024-11-19 23:27:27.873853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:14:41.857 [2024-11-19 23:27:27.873863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.857 [2024-11-19 23:27:27.875415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.857 [2024-11-19 23:27:27.875442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:41.857 [2024-11-19 23:27:27.875452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:14:41.857 [2024-11-19 23:27:27.875471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.857 [2024-11-19 23:27:27.875571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:41.857 [2024-11-19 23:27:27.875582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:41.857 [2024-11-19 23:27:27.875590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:14:41.857 [2024-11-19 23:27:27.875602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.857 [2024-11-19 23:27:27.881060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.857 [2024-11-19 23:27:27.881094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:41.857 [2024-11-19 23:27:27.881104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.857 [2024-11-19 23:27:27.881113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.857 
[2024-11-19 23:27:27.881177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.857 [2024-11-19 23:27:27.881187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:41.858 [2024-11-19 23:27:27.881195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.881205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.881285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.881299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:41.858 [2024-11-19 23:27:27.881307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.881316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.881347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.881357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:41.858 [2024-11-19 23:27:27.881364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.881373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.890937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.891126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:41.858 [2024-11-19 23:27:27.891141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.891162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.898994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.899030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:41.858 [2024-11-19 23:27:27.899039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.899050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.899144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.899158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:41.858 [2024-11-19 23:27:27.899178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.899187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.899239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.899260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:41.858 [2024-11-19 23:27:27.899268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.899276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.899358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.899372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:41.858 [2024-11-19 23:27:27.899380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.899389] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.899435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.899446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:41.858 [2024-11-19 23:27:27.899453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.899463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.899505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.899529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:41.858 [2024-11-19 23:27:27.899537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.899545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.899600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:41.858 [2024-11-19 23:27:27.899621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:41.858 [2024-11-19 23:27:27.899630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:41.858 [2024-11-19 23:27:27.899641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:41.858 [2024-11-19 23:27:27.899830] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.177 ms, result 0 00:14:41.858 true 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 83890 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 83890 ']' 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 83890 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83890 00:14:41.858 killing process with pid 83890 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83890' 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 83890 00:14:41.858 23:27:27 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 83890 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:47.123 23:27:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:47.123 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:47.123 fio-3.35 00:14:47.123 Starting 1 thread 00:14:51.307 00:14:51.307 test: (groupid=0, jobs=1): err= 0: pid=84064: Tue Nov 19 23:27:37 2024 00:14:51.307 read: IOPS=916, BW=60.8MiB/s (63.8MB/s)(255MiB/4184msec) 00:14:51.307 slat (nsec): min=3947, max=24184, avg=5399.02, stdev=1738.70 00:14:51.307 clat (usec): min=243, max=1708, avg=493.76, stdev=190.34 00:14:51.307 lat (usec): min=248, max=1723, avg=499.16, stdev=190.48 00:14:51.307 clat percentiles (usec): 00:14:51.307 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 322], 00:14:51.307 | 30.00th=[ 334], 40.00th=[ 388], 50.00th=[ 433], 60.00th=[ 490], 00:14:51.307 | 70.00th=[ 562], 80.00th=[ 717], 90.00th=[ 807], 95.00th=[ 840], 00:14:51.307 | 99.00th=[ 898], 99.50th=[ 947], 99.90th=[ 1156], 99.95th=[ 1254], 00:14:51.307 | 99.99th=[ 1713] 00:14:51.307 write: IOPS=923, BW=61.3MiB/s (64.3MB/s)(256MiB/4177msec); 0 zone resets 00:14:51.308 slat (nsec): min=14647, max=78990, avg=18950.21, stdev=3086.71 00:14:51.308 clat (usec): min=268, max=1868, avg=557.24, stdev=216.00 00:14:51.308 lat (usec): min=292, max=1896, avg=576.19, stdev=216.03 00:14:51.308 clat percentiles (usec): 00:14:51.308 | 1.00th=[ 306], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 351], 00:14:51.308 | 30.00th=[ 371], 40.00th=[ 445], 50.00th=[ 490], 60.00th=[ 562], 00:14:51.308 | 70.00th=[ 668], 80.00th=[ 807], 90.00th=[ 881], 95.00th=[ 906], 00:14:51.308 | 99.00th=[ 1057], 99.50th=[ 1156], 99.90th=[ 1582], 99.95th=[ 1762], 00:14:51.308 | 99.99th=[ 1876] 00:14:51.308 bw ( KiB/s): min=43520, max=81056, per=100.00%, avg=63665.00, stdev=15350.44, samples=8 00:14:51.308 iops : min= 640, max= 1192, avg=936.25, stdev=225.74, samples=8 00:14:51.308 lat (usec) : 250=0.01%, 500=57.51%, 750=21.24%, 
1000=20.34% 00:14:51.308 lat (msec) : 2=0.90% 00:14:51.308 cpu : usr=99.35%, sys=0.02%, ctx=12, majf=0, minf=1181 00:14:51.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:51.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.308 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:51.308 00:14:51.308 Run status group 0 (all jobs): 00:14:51.308 READ: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=255MiB (267MB), run=4184-4184msec 00:14:51.308 WRITE: bw=61.3MiB/s (64.3MB/s), 61.3MiB/s-61.3MiB/s (64.3MB/s-64.3MB/s), io=256MiB (269MB), run=4177-4177msec 00:14:52.249 ----------------------------------------------------- 00:14:52.250 Suppressions used: 00:14:52.250 count bytes template 00:14:52.250 1 5 /usr/src/fio/parse.c 00:14:52.250 1 8 libtcmalloc_minimal.so 00:14:52.250 1 904 libcrypto.so 00:14:52.250 ----------------------------------------------------- 00:14:52.250 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:52.250 23:27:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:52.250 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:52.250 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:52.250 fio-3.35 00:14:52.250 Starting 2 threads 00:15:18.823 00:15:18.823 first_half: (groupid=0, jobs=1): err= 0: pid=84160: Tue Nov 19 23:28:01 2024 00:15:18.823 read: IOPS=2949, BW=11.5MiB/s (12.1MB/s)(256MiB/22196msec) 00:15:18.823 slat (nsec): min=3046, max=34136, avg=5403.60, stdev=1437.22 00:15:18.823 clat (usec): min=1674, max=362217, avg=36831.77, stdev=22874.80 00:15:18.823 lat (usec): min=1679, max=362226, avg=36837.17, stdev=22874.95 00:15:18.823 clat percentiles (msec): 00:15:18.823 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:15:18.823 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 33], 00:15:18.823 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 70], 00:15:18.823 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 300], 99.95th=[ 334], 00:15:18.823 | 99.99th=[ 359] 00:15:18.823 write: IOPS=2959, BW=11.6MiB/s (12.1MB/s)(256MiB/22142msec); 0 zone resets 00:15:18.823 slat (usec): min=3, max=1540, avg= 6.79, stdev= 8.15 00:15:18.823 clat (usec): min=364, max=38548, avg=6527.32, stdev=6342.73 00:15:18.823 lat (usec): min=372, max=38554, avg=6534.11, stdev=6343.37 00:15:18.823 clat percentiles (usec): 00:15:18.823 | 1.00th=[ 734], 5.00th=[ 873], 10.00th=[ 1254], 20.00th=[ 2671], 00:15:18.823 | 30.00th=[ 3326], 40.00th=[ 4080], 50.00th=[ 4817], 60.00th=[ 5407], 00:15:18.823 | 70.00th=[ 5997], 80.00th=[ 7504], 90.00th=[15795], 95.00th=[20055], 00:15:18.823 | 99.00th=[31327], 99.50th=[33162], 99.90th=[35390], 99.95th=[36963], 00:15:18.823 | 99.99th=[38011] 00:15:18.823 bw ( KiB/s): min= 896, max=41496, per=99.95%, avg=23667.27, stdev=14011.98, samples=22 00:15:18.823 iops : min= 224, max=10374, avg=5916.82, stdev=3502.99, samples=22 00:15:18.823 lat (usec) : 500=0.04%, 750=0.67%, 1000=2.87% 00:15:18.823 lat (msec) : 2=3.41%, 4=12.57%, 10=22.60%, 20=6.69%, 50=47.73% 00:15:18.823 lat (msec) : 100=1.71%, 250=1.65%, 500=0.07% 00:15:18.823 cpu : usr=99.27%, sys=0.17%, ctx=45, majf=0, minf=5553 00:15:18.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:18.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.823 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:18.823 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.823 second_half: (groupid=0, jobs=1): err= 0: pid=84161: Tue Nov 19 23:28:01 2024 00:15:18.823 read: IOPS=2984, BW=11.7MiB/s (12.2MB/s)(256MiB/21940msec) 00:15:18.823 slat (nsec): min=2998, max=38354, avg=4214.34, stdev=1274.16 00:15:18.823 clat (msec): min=9, max=260, avg=36.90, stdev=20.09 00:15:18.823 lat (msec): min=9, max=260, avg=36.90, stdev=20.09 00:15:18.823 clat percentiles (msec): 00:15:18.823 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:15:18.823 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 34], 00:15:18.823 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 67], 
00:15:18.823 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 213], 00:15:18.823 | 99.99th=[ 251] 00:15:18.823 write: IOPS=3004, BW=11.7MiB/s (12.3MB/s)(256MiB/21812msec); 0 zone resets 00:15:18.823 slat (usec): min=3, max=1282, avg= 5.67, stdev= 6.38 00:15:18.823 clat (usec): min=368, max=42611, avg=5960.53, stdev=4430.09 00:15:18.823 lat (usec): min=375, max=42616, avg=5966.20, stdev=4431.02 00:15:18.823 clat percentiles (usec): 00:15:18.823 | 1.00th=[ 898], 5.00th=[ 1795], 10.00th=[ 2442], 20.00th=[ 3163], 00:15:18.823 | 30.00th=[ 3785], 40.00th=[ 4359], 50.00th=[ 4817], 60.00th=[ 5276], 00:15:18.823 | 70.00th=[ 5538], 80.00th=[ 6521], 90.00th=[12649], 95.00th=[16319], 00:15:18.823 | 99.00th=[20579], 99.50th=[23987], 99.90th=[34341], 99.95th=[40109], 00:15:18.823 | 99.99th=[41681] 00:15:18.823 bw ( KiB/s): min= 504, max=47616, per=99.93%, avg=23663.27, stdev=15145.44, samples=22 00:15:18.823 iops : min= 126, max=11904, avg=5915.82, stdev=3786.36, samples=22 00:15:18.823 lat (usec) : 500=0.03%, 750=0.18%, 1000=0.53% 00:15:18.823 lat (msec) : 2=2.27%, 4=13.70%, 10=25.89%, 20=6.90%, 50=47.24% 00:15:18.823 lat (msec) : 100=1.73%, 250=1.54%, 500=0.01% 00:15:18.823 cpu : usr=99.33%, sys=0.09%, ctx=27, majf=0, minf=5587 00:15:18.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:18.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.823 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:18.823 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:18.823 00:15:18.823 Run status group 0 (all jobs): 00:15:18.823 READ: bw=23.0MiB/s (24.2MB/s), 11.5MiB/s-11.7MiB/s (12.1MB/s-12.2MB/s), io=512MiB (536MB), run=21940-22196msec 00:15:18.823 WRITE: bw=23.1MiB/s (24.2MB/s), 11.6MiB/s-11.7MiB/s (12.1MB/s-12.3MB/s), io=512MiB (537MB), run=21812-22142msec 00:15:18.823 ----------------------------------------------------- 00:15:18.823 Suppressions used: 00:15:18.823 count bytes template 00:15:18.823 2 10 /usr/src/fio/parse.c 00:15:18.823 3 288 /usr/src/fio/iolog.c 00:15:18.823 1 8 libtcmalloc_minimal.so 00:15:18.823 1 904 libcrypto.so 00:15:18.823 ----------------------------------------------------- 00:15:18.823 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:18.823 
23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:18.823 23:28:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:18.823 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:18.823 fio-3.35 00:15:18.823 Starting 1 thread 00:15:36.923 00:15:36.923 test: (groupid=0, jobs=1): err= 0: pid=84447: Tue Nov 19 23:28:20 2024 00:15:36.923 read: IOPS=6863, BW=26.8MiB/s (28.1MB/s)(255MiB/9500msec) 00:15:36.923 slat (nsec): min=3002, max=35898, avg=5660.25, stdev=2373.35 00:15:36.923 clat (usec): min=1111, max=37615, avg=18639.03, stdev=3819.39 00:15:36.923 lat (usec): min=1125, max=37623, avg=18644.69, stdev=3820.48 00:15:36.923 clat percentiles (usec): 00:15:36.923 | 1.00th=[14091], 5.00th=[14615], 10.00th=[15270], 20.00th=[15664], 00:15:36.923 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16909], 60.00th=[18482], 00:15:36.923 | 70.00th=[20317], 80.00th=[22152], 90.00th=[24249], 95.00th=[26084], 00:15:36.923 | 99.00th=[30016], 99.50th=[31327], 99.90th=[34866], 99.95th=[35390], 00:15:36.923 | 99.99th=[36439] 00:15:36.923 write: IOPS=8988, BW=35.1MiB/s (36.8MB/s)(256MiB/7291msec); 0 zone resets 00:15:36.923 slat (usec): min=4, max=867, avg= 7.82, stdev= 7.98 00:15:36.923 clat (usec): min=529, max=93084, avg=14181.77, stdev=16458.45 00:15:36.923 lat (usec): min=536, max=93094, avg=14189.59, stdev=16458.66 00:15:36.923 clat percentiles (usec): 00:15:36.923 | 1.00th=[ 873], 5.00th=[ 1221], 10.00th=[ 1450], 20.00th=[ 1778], 00:15:36.923 | 30.00th=[ 2180], 40.00th=[ 3097], 50.00th=[ 8717], 60.00th=[12256], 00:15:36.923 | 70.00th=[16188], 80.00th=[19268], 90.00th=[46400], 95.00th=[53216], 00:15:36.923 | 99.00th=[61604], 99.50th=[63701], 99.90th=[69731], 99.95th=[77071], 00:15:36.923 | 99.99th=[86508] 00:15:36.923 bw ( KiB/s): min=15392, max=56832, per=97.21%, avg=34952.53, stdev=9574.33, samples=15 00:15:36.923 iops : min= 3848, max=14208, avg=8738.13, stdev=2393.58, samples=15 00:15:36.923 lat (usec) : 750=0.17%, 1000=0.91% 00:15:36.923 lat (msec) : 2=11.97%, 4=7.86%, 10=5.81%, 20=48.32%, 50=21.28% 00:15:36.923 lat (msec) : 100=3.69% 00:15:36.923 cpu : usr=98.88%, sys=0.23%, ctx=35, majf=0, 
minf=5577 00:15:36.923 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:36.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.923 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:36.923 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.923 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:36.923 00:15:36.923 Run status group 0 (all jobs): 00:15:36.923 READ: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=255MiB (267MB), run=9500-9500msec 00:15:36.923 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=256MiB (268MB), run=7291-7291msec 00:15:36.923 ----------------------------------------------------- 00:15:36.923 Suppressions used: 00:15:36.923 count bytes template 00:15:36.923 1 5 /usr/src/fio/parse.c 00:15:36.923 2 192 /usr/src/fio/iolog.c 00:15:36.923 1 8 libtcmalloc_minimal.so 00:15:36.923 1 904 libcrypto.so 00:15:36.923 ----------------------------------------------------- 00:15:36.923 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:36.923 Remove shared memory files 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69417 /dev/shm/spdk_tgt_trace.pid82835 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:36.923 ************************************ 00:15:36.923 END TEST ftl_fio_basic 00:15:36.923 ************************************ 00:15:36.923 00:15:36.923 real 1m1.519s 00:15:36.923 user 2m16.617s 00:15:36.923 sys 0m2.792s 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.923 23:28:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:36.923 23:28:21 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:36.923 23:28:21 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:36.923 23:28:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.923 23:28:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:36.923 ************************************ 00:15:36.923 START TEST ftl_bdevperf 00:15:36.923 ************************************ 00:15:36.923 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:36.923 * Looking for test storage... 
00:15:36.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.924 --rc genhtml_branch_coverage=1 00:15:36.924 --rc genhtml_function_coverage=1 00:15:36.924 --rc genhtml_legend=1 00:15:36.924 --rc geninfo_all_blocks=1 00:15:36.924 --rc geninfo_unexecuted_blocks=1 00:15:36.924 00:15:36.924 ' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.924 --rc genhtml_branch_coverage=1 00:15:36.924 
--rc genhtml_function_coverage=1 00:15:36.924 --rc genhtml_legend=1 00:15:36.924 --rc geninfo_all_blocks=1 00:15:36.924 --rc geninfo_unexecuted_blocks=1 00:15:36.924 00:15:36.924 ' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.924 --rc genhtml_branch_coverage=1 00:15:36.924 --rc genhtml_function_coverage=1 00:15:36.924 --rc genhtml_legend=1 00:15:36.924 --rc geninfo_all_blocks=1 00:15:36.924 --rc geninfo_unexecuted_blocks=1 00:15:36.924 00:15:36.924 ' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:36.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.924 --rc genhtml_branch_coverage=1 00:15:36.924 --rc genhtml_function_coverage=1 00:15:36.924 --rc genhtml_legend=1 00:15:36.924 --rc geninfo_all_blocks=1 00:15:36.924 --rc geninfo_unexecuted_blocks=1 00:15:36.924 00:15:36.924 ' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:36.924 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84713 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84713 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 84713 ']' 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.925 23:28:21 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:36.925 [2024-11-19 23:28:21.666108] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:15:36.925 [2024-11-19 23:28:21.666430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84713 ] 00:15:36.925 [2024-11-19 23:28:21.836531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.925 [2024-11-19 23:28:21.864935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:36.925 23:28:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:36.925 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:36.925 { 00:15:36.925 "name": "nvme0n1", 00:15:36.925 "aliases": [ 00:15:36.925 "44f6d73c-852c-407a-b687-c3f56d0b3709" 00:15:36.925 ], 00:15:36.925 "product_name": "NVMe disk", 00:15:36.925 "block_size": 4096, 00:15:36.925 "num_blocks": 1310720, 00:15:36.925 "uuid": "44f6d73c-852c-407a-b687-c3f56d0b3709", 00:15:36.925 "numa_id": -1, 00:15:36.925 "assigned_rate_limits": { 00:15:36.925 "rw_ios_per_sec": 0, 00:15:36.925 "rw_mbytes_per_sec": 0, 00:15:36.925 "r_mbytes_per_sec": 0, 00:15:36.925 "w_mbytes_per_sec": 0 00:15:36.925 }, 00:15:36.925 "claimed": true, 00:15:36.925 "claim_type": "read_many_write_one", 00:15:36.925 "zoned": false, 00:15:36.925 "supported_io_types": { 00:15:36.925 "read": true, 00:15:36.925 "write": true, 00:15:36.925 "unmap": true, 00:15:36.925 "flush": true, 00:15:36.925 "reset": true, 00:15:36.925 "nvme_admin": true, 00:15:36.925 "nvme_io": true, 00:15:36.925 "nvme_io_md": false, 00:15:36.925 "write_zeroes": true, 00:15:36.925 "zcopy": false, 00:15:36.925 "get_zone_info": false, 00:15:36.925 "zone_management": false, 00:15:36.925 "zone_append": false, 00:15:36.925 "compare": true, 00:15:36.925 "compare_and_write": false, 00:15:36.925 "abort": true, 00:15:36.925 "seek_hole": false, 00:15:36.925 "seek_data": false, 00:15:36.925 "copy": true, 00:15:36.925 "nvme_iov_md": false 00:15:36.925 }, 00:15:36.925 "driver_specific": { 00:15:36.925 
"nvme": [ 00:15:36.925 { 00:15:36.925 "pci_address": "0000:00:11.0", 00:15:36.925 "trid": { 00:15:36.925 "trtype": "PCIe", 00:15:36.925 "traddr": "0000:00:11.0" 00:15:36.925 }, 00:15:36.925 "ctrlr_data": { 00:15:36.925 "cntlid": 0, 00:15:36.925 "vendor_id": "0x1b36", 00:15:36.925 "model_number": "QEMU NVMe Ctrl", 00:15:36.925 "serial_number": "12341", 00:15:36.925 "firmware_revision": "8.0.0", 00:15:36.925 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:36.925 "oacs": { 00:15:36.925 "security": 0, 00:15:36.925 "format": 1, 00:15:36.925 "firmware": 0, 00:15:36.925 "ns_manage": 1 00:15:36.925 }, 00:15:36.925 "multi_ctrlr": false, 00:15:36.925 "ana_reporting": false 00:15:36.925 }, 00:15:36.925 "vs": { 00:15:36.925 "nvme_version": "1.4" 00:15:36.925 }, 00:15:36.925 "ns_data": { 00:15:36.925 "id": 1, 00:15:36.925 "can_share": false 00:15:36.925 } 00:15:36.925 } 00:15:36.925 ], 00:15:36.925 "mp_policy": "active_passive" 00:15:36.926 } 00:15:36.926 } 00:15:36.926 ]' 00:15:36.926 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:36.926 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:36.926 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e34916f3-8243-4a87-8730-d770f1fd342d 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:37.187 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e34916f3-8243-4a87-8730-d770f1fd342d 00:15:37.449 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:37.710 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b6d5af07-8312-4f7a-ba72-8654fb2281c7 00:15:37.710 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b6d5af07-8312-4f7a-ba72-8654fb2281c7 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e2989a6a-3641-48a4-b120-478133721ed7 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e2989a6a-3641-48a4-b120-478133721ed7 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e2989a6a-3641-48a4-b120-478133721ed7 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e2989a6a-3641-48a4-b120-478133721ed7 00:15:37.970 23:28:23 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e2989a6a-3641-48a4-b120-478133721ed7 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:37.970 23:28:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2989a6a-3641-48a4-b120-478133721ed7 00:15:38.231 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:38.231 { 00:15:38.231 "name": "e2989a6a-3641-48a4-b120-478133721ed7", 00:15:38.231 "aliases": [ 00:15:38.231 "lvs/nvme0n1p0" 00:15:38.231 ], 00:15:38.231 "product_name": "Logical Volume", 00:15:38.231 "block_size": 4096, 00:15:38.231 "num_blocks": 26476544, 00:15:38.232 "uuid": "e2989a6a-3641-48a4-b120-478133721ed7", 00:15:38.232 "assigned_rate_limits": { 00:15:38.232 "rw_ios_per_sec": 0, 00:15:38.232 "rw_mbytes_per_sec": 0, 00:15:38.232 "r_mbytes_per_sec": 0, 00:15:38.232 "w_mbytes_per_sec": 0 00:15:38.232 }, 00:15:38.232 "claimed": false, 00:15:38.232 "zoned": false, 00:15:38.232 "supported_io_types": { 00:15:38.232 "read": true, 00:15:38.232 "write": true, 00:15:38.232 "unmap": true, 00:15:38.232 "flush": false, 00:15:38.232 "reset": true, 00:15:38.232 "nvme_admin": false, 00:15:38.232 "nvme_io": false, 00:15:38.232 "nvme_io_md": false, 00:15:38.232 "write_zeroes": true, 00:15:38.232 "zcopy": false, 00:15:38.232 "get_zone_info": false, 00:15:38.232 "zone_management": false, 00:15:38.232 "zone_append": false, 00:15:38.232 "compare": false, 00:15:38.232 "compare_and_write": false, 00:15:38.232 "abort": false, 00:15:38.232 "seek_hole": true, 00:15:38.232 "seek_data": true, 00:15:38.232 "copy": false, 00:15:38.232 "nvme_iov_md": false 00:15:38.232 }, 00:15:38.232 "driver_specific": { 00:15:38.232 "lvol": { 00:15:38.232 "lvol_store_uuid": "b6d5af07-8312-4f7a-ba72-8654fb2281c7", 00:15:38.232 "base_bdev": "nvme0n1", 00:15:38.232 "thin_provision": true, 00:15:38.232 "num_allocated_clusters": 0, 00:15:38.232 "snapshot": false, 00:15:38.232 "clone": false, 00:15:38.232 "esnap_clone": false 00:15:38.232 } 00:15:38.232 } 00:15:38.232 } 00:15:38.232 ]' 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:38.232 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e2989a6a-3641-48a4-b120-478133721ed7 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=e2989a6a-3641-48a4-b120-478133721ed7 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:38.494 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2989a6a-3641-48a4-b120-478133721ed7 00:15:38.754 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:38.754 { 00:15:38.754 "name": "e2989a6a-3641-48a4-b120-478133721ed7", 00:15:38.754 "aliases": [ 00:15:38.754 "lvs/nvme0n1p0" 00:15:38.754 ], 00:15:38.754 "product_name": "Logical Volume", 00:15:38.754 "block_size": 4096, 00:15:38.754 "num_blocks": 26476544, 00:15:38.754 "uuid": "e2989a6a-3641-48a4-b120-478133721ed7", 00:15:38.754 "assigned_rate_limits": { 00:15:38.754 "rw_ios_per_sec": 0, 00:15:38.754 "rw_mbytes_per_sec": 0, 00:15:38.754 "r_mbytes_per_sec": 0, 00:15:38.754 "w_mbytes_per_sec": 0 00:15:38.754 }, 00:15:38.755 "claimed": false, 00:15:38.755 "zoned": false, 00:15:38.755 "supported_io_types": { 00:15:38.755 "read": true, 00:15:38.755 "write": true, 00:15:38.755 "unmap": true, 00:15:38.755 "flush": false, 00:15:38.755 "reset": true, 00:15:38.755 "nvme_admin": false, 00:15:38.755 "nvme_io": false, 00:15:38.755 "nvme_io_md": false, 00:15:38.755 "write_zeroes": true, 00:15:38.755 "zcopy": false, 00:15:38.755 "get_zone_info": false, 00:15:38.755 "zone_management": false, 00:15:38.755 "zone_append": false, 00:15:38.755 "compare": false, 00:15:38.755 "compare_and_write": false, 00:15:38.755 "abort": false, 00:15:38.755 "seek_hole": true, 00:15:38.755 "seek_data": true, 00:15:38.755 "copy": false, 00:15:38.755 "nvme_iov_md": false 00:15:38.755 }, 00:15:38.755 "driver_specific": { 00:15:38.755 "lvol": { 00:15:38.755 "lvol_store_uuid": "b6d5af07-8312-4f7a-ba72-8654fb2281c7", 00:15:38.755 "base_bdev": "nvme0n1", 00:15:38.755 "thin_provision": true, 00:15:38.755 "num_allocated_clusters": 0, 00:15:38.755 "snapshot": false, 00:15:38.755 "clone": false, 00:15:38.755 "esnap_clone": false 00:15:38.755 } 00:15:38.755 } 00:15:38.755 } 00:15:38.755 ]' 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:38.755 23:28:24 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:39.016 23:28:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:39.016 23:28:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e2989a6a-3641-48a4-b120-478133721ed7 00:15:39.016 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e2989a6a-3641-48a4-b120-478133721ed7 00:15:39.016 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:39.016 23:28:24 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:15:39.016 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:15:39.016 23:28:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e2989a6a-3641-48a4-b120-478133721ed7 00:15:39.016 23:28:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:39.016 { 00:15:39.016 "name": "e2989a6a-3641-48a4-b120-478133721ed7", 00:15:39.016 "aliases": [ 00:15:39.016 "lvs/nvme0n1p0" 00:15:39.016 ], 00:15:39.016 "product_name": "Logical Volume", 00:15:39.016 "block_size": 4096, 00:15:39.016 "num_blocks": 26476544, 00:15:39.016 "uuid": "e2989a6a-3641-48a4-b120-478133721ed7", 00:15:39.016 "assigned_rate_limits": { 00:15:39.016 "rw_ios_per_sec": 0, 00:15:39.016 "rw_mbytes_per_sec": 0, 00:15:39.016 "r_mbytes_per_sec": 0, 00:15:39.016 "w_mbytes_per_sec": 0 00:15:39.016 }, 00:15:39.016 "claimed": false, 00:15:39.016 "zoned": false, 00:15:39.016 "supported_io_types": { 00:15:39.016 "read": true, 00:15:39.016 "write": true, 00:15:39.016 "unmap": true, 00:15:39.016 "flush": false, 00:15:39.016 "reset": true, 00:15:39.016 "nvme_admin": false, 00:15:39.016 "nvme_io": false, 00:15:39.016 "nvme_io_md": false, 00:15:39.016 "write_zeroes": true, 00:15:39.016 "zcopy": false, 00:15:39.016 "get_zone_info": false, 00:15:39.016 "zone_management": false, 00:15:39.016 "zone_append": false, 00:15:39.016 "compare": false, 00:15:39.016 "compare_and_write": false, 00:15:39.016 "abort": false, 00:15:39.016 "seek_hole": true, 00:15:39.016 "seek_data": true, 00:15:39.016 "copy": false, 00:15:39.016 "nvme_iov_md": false 00:15:39.016 }, 00:15:39.016 "driver_specific": { 00:15:39.016 "lvol": { 00:15:39.016 "lvol_store_uuid": "b6d5af07-8312-4f7a-ba72-8654fb2281c7", 00:15:39.016 "base_bdev": "nvme0n1", 00:15:39.016 "thin_provision": true, 00:15:39.016 "num_allocated_clusters": 0, 00:15:39.016 "snapshot": false, 00:15:39.016 "clone": false, 00:15:39.016 "esnap_clone": false 00:15:39.016 } 00:15:39.016 } 00:15:39.016 } 00:15:39.016 ]' 00:15:39.016 23:28:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:39.277 23:28:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:15:39.277 23:28:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:39.277 23:28:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:15:39.277 23:28:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:15:39.277 23:28:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:15:39.277 23:28:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:39.277 23:28:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e2989a6a-3641-48a4-b120-478133721ed7 -c nvc0n1p0 --l2p_dram_limit 20 00:15:39.277 [2024-11-19 23:28:25.446316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.446436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:39.277 [2024-11-19 23:28:25.446454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:39.277 [2024-11-19 23:28:25.446461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.446510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.446521] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:39.277 [2024-11-19 23:28:25.446534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:39.277 [2024-11-19 23:28:25.446540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.446559] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:39.277 [2024-11-19 23:28:25.446766] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:39.277 [2024-11-19 23:28:25.446783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.446789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:39.277 [2024-11-19 23:28:25.446801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:15:39.277 [2024-11-19 23:28:25.446807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.446858] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c6dc0359-fdbc-451c-8a0a-ed004f0c65ce 00:15:39.277 [2024-11-19 23:28:25.447792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.447809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:39.277 [2024-11-19 23:28:25.447817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:39.277 [2024-11-19 23:28:25.447824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.452410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.452438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:39.277 [2024-11-19 23:28:25.452446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.559 ms 00:15:39.277 [2024-11-19 23:28:25.452458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.452512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.452520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:39.277 [2024-11-19 23:28:25.452526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:15:39.277 [2024-11-19 23:28:25.452538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.452565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.452574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:39.277 [2024-11-19 23:28:25.452579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:39.277 [2024-11-19 23:28:25.452587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.452604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:39.277 [2024-11-19 23:28:25.453860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.453885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:39.277 [2024-11-19 23:28:25.453895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:15:39.277 [2024-11-19 23:28:25.453901] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.453925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.453931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:39.277 [2024-11-19 23:28:25.453940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:39.277 [2024-11-19 23:28:25.453946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.453958] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:39.277 [2024-11-19 23:28:25.454064] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:39.277 [2024-11-19 23:28:25.454075] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:39.277 [2024-11-19 23:28:25.454086] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:39.277 [2024-11-19 23:28:25.454098] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454104] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:39.277 [2024-11-19 23:28:25.454119] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:39.277 [2024-11-19 23:28:25.454125] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:39.277 [2024-11-19 23:28:25.454131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:39.277 [2024-11-19 23:28:25.454140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.454145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:39.277 [2024-11-19 23:28:25.454153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:15:39.277 [2024-11-19 23:28:25.454158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.454222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.454228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:39.277 [2024-11-19 23:28:25.454235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:39.277 [2024-11-19 23:28:25.454240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.454309] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:39.277 [2024-11-19 23:28:25.454317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:39.277 [2024-11-19 23:28:25.454324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:39.277 [2024-11-19 23:28:25.454346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:39.277 
[2024-11-19 23:28:25.454358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:39.277 [2024-11-19 23:28:25.454365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:39.277 [2024-11-19 23:28:25.454378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:39.277 [2024-11-19 23:28:25.454384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:39.277 [2024-11-19 23:28:25.454392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:39.277 [2024-11-19 23:28:25.454397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:39.277 [2024-11-19 23:28:25.454405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:39.277 [2024-11-19 23:28:25.454411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:39.277 [2024-11-19 23:28:25.454422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:39.277 [2024-11-19 23:28:25.454440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:39.277 [2024-11-19 23:28:25.454456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:39.277 [2024-11-19 23:28:25.454473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:39.277 [2024-11-19 23:28:25.454490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:39.277 [2024-11-19 23:28:25.454509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.277 [2024-11-19 23:28:25.454522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:39.277 [2024-11-19 23:28:25.454528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:39.277 [2024-11-19 23:28:25.454536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.277 [2024-11-19 23:28:25.454541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:39.277 [2024-11-19 23:28:25.454548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:39.277 [2024-11-19 23:28:25.454554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:39.277 [2024-11-19 23:28:25.454566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:39.277 [2024-11-19 23:28:25.454573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454578] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:39.277 [2024-11-19 23:28:25.454587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:39.277 [2024-11-19 23:28:25.454593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.277 [2024-11-19 23:28:25.454609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:39.277 [2024-11-19 23:28:25.454615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:39.277 [2024-11-19 23:28:25.454620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:39.277 [2024-11-19 23:28:25.454626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:39.277 [2024-11-19 23:28:25.454631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:39.277 [2024-11-19 23:28:25.454637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:39.277 [2024-11-19 23:28:25.454644] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:39.277 [2024-11-19 23:28:25.454652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.277 [2024-11-19 23:28:25.454658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:39.277 [2024-11-19 23:28:25.454665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:39.277 [2024-11-19 23:28:25.454670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:39.277 [2024-11-19 23:28:25.454676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:39.277 [2024-11-19 23:28:25.454682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:39.277 [2024-11-19 23:28:25.454691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:39.277 [2024-11-19 23:28:25.454696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:39.277 [2024-11-19 23:28:25.454703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:39.277 [2024-11-19 23:28:25.454707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:39.277 [2024-11-19 23:28:25.454714] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:39.277 [2024-11-19 23:28:25.454719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:39.277 [2024-11-19 23:28:25.454726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:39.277 [2024-11-19 23:28:25.454740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:39.277 [2024-11-19 23:28:25.454747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:39.277 [2024-11-19 23:28:25.454752] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:39.277 [2024-11-19 23:28:25.454760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.277 [2024-11-19 23:28:25.454767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:39.277 [2024-11-19 23:28:25.454774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:39.277 [2024-11-19 23:28:25.454780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:39.277 [2024-11-19 23:28:25.454787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:39.277 [2024-11-19 23:28:25.454793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.277 [2024-11-19 23:28:25.454801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:39.277 [2024-11-19 23:28:25.454809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:15:39.277 [2024-11-19 23:28:25.454816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.277 [2024-11-19 23:28:25.454839] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
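The bdev_ftl_create trace above is the tail end of a plain RPC sequence: a thin 103424 MiB lvol on the 0000:00:11.0 namespace becomes the base device, a 5171 MiB split of the 0000:00:10.0 namespace becomes the NV cache, and bdev_ftl_create ties them together under a 20 MiB L2P DRAM limit. A minimal standalone sketch of that sequence, using the same rpc.py path, PCIe addresses, flags, and sizes this run traced; the variable names and the UUID-capture lines are illustrative only and assume rpc.py's usual behaviour of printing the created object's UUID/bdev name, which the trace does not show verbatim:

    #!/usr/bin/env bash
    # Sketch of the FTL bdev construction traced in this run.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: attach the 0000:00:11.0 controller, then carve a
    # thin-provisioned (-t) 103424 MiB lvol out of an lvstore on nvme0n1.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs_uuid=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs | tr -d '"')   # tr strips JSON quoting, if any
    lvol_uuid=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid" | tr -d '"')

    # NV cache: attach the 0000:00:10.0 controller and split off one
    # 5171 MiB partition (nvc0n1p0) to serve as the write buffer cache.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev with a 20 MiB L2P DRAM limit; -t 240 mirrors the 240 s
    # RPC timeout this run passed for the long create step.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 --l2p_dram_limit 20

The 20 MiB limit is a real constraint here: the layout dump above reports 20971520 L2P entries of 4 bytes each, i.e. an 80 MiB map (the 80.00 MiB l2p region), so only about a quarter of it can stay DRAM-resident, which is what the "l2p maximum resident size is: 19 (of 20) MiB" notice further down reflects.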
00:15:39.277 [2024-11-19 23:28:25.454849] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:43.490 [2024-11-19 23:28:29.449270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.449783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:43.490 [2024-11-19 23:28:29.449895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3994.410 ms 00:15:43.490 [2024-11-19 23:28:29.449932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.464028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.464257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:43.490 [2024-11-19 23:28:29.464360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.933 ms 00:15:43.490 [2024-11-19 23:28:29.464391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.464540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.464570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:43.490 [2024-11-19 23:28:29.464654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:15:43.490 [2024-11-19 23:28:29.464684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.484934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.485011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:43.490 [2024-11-19 23:28:29.485029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.159 ms 00:15:43.490 [2024-11-19 23:28:29.485044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.485094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.485110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:43.490 [2024-11-19 23:28:29.485127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:43.490 [2024-11-19 23:28:29.485141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.485725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.485796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:43.490 [2024-11-19 23:28:29.485812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:15:43.490 [2024-11-19 23:28:29.485832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.486002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.486018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:43.490 [2024-11-19 23:28:29.486031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:15:43.490 [2024-11-19 23:28:29.486048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.494916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.494976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:43.490 [2024-11-19 
23:28:29.494990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.844 ms 00:15:43.490 [2024-11-19 23:28:29.495003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.505178] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:43.490 [2024-11-19 23:28:29.513010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.513056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:43.490 [2024-11-19 23:28:29.513077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.894 ms 00:15:43.490 [2024-11-19 23:28:29.513085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.600922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.600990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:43.490 [2024-11-19 23:28:29.601009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.784 ms 00:15:43.490 [2024-11-19 23:28:29.601018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.601225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.601236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:43.490 [2024-11-19 23:28:29.601247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:15:43.490 [2024-11-19 23:28:29.601257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.607190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.607224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:43.490 [2024-11-19 23:28:29.607236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.911 ms 00:15:43.490 [2024-11-19 23:28:29.607244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.610607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.610641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:43.490 [2024-11-19 23:28:29.610653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:15:43.490 [2024-11-19 23:28:29.610660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.610973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.610988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:43.490 [2024-11-19 23:28:29.611001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:15:43.490 [2024-11-19 23:28:29.611008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.644188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.644324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:43.490 [2024-11-19 23:28:29.644344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.159 ms 00:15:43.490 [2024-11-19 23:28:29.644352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.649401] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.649444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:43.490 [2024-11-19 23:28:29.649456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms 00:15:43.490 [2024-11-19 23:28:29.649464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.653375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.653409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:43.490 [2024-11-19 23:28:29.653421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:15:43.490 [2024-11-19 23:28:29.653428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.657842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.657875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:43.490 [2024-11-19 23:28:29.657888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.379 ms 00:15:43.490 [2024-11-19 23:28:29.657895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.657933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.657942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:43.490 [2024-11-19 23:28:29.657954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:43.490 [2024-11-19 23:28:29.657961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.658022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.490 [2024-11-19 23:28:29.658031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:43.490 [2024-11-19 23:28:29.658044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:43.490 [2024-11-19 23:28:29.658051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.490 [2024-11-19 23:28:29.658882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4212.158 ms, result 0 00:15:43.490 { 00:15:43.490 "name": "ftl0", 00:15:43.490 "uuid": "c6dc0359-fdbc-451c-8a0a-ed004f0c65ce" 00:15:43.490 } 00:15:43.751 23:28:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:43.751 23:28:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:43.751 23:28:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:43.751 23:28:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:44.013 [2024-11-19 23:28:29.983973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:44.013 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:44.013 Zero copy mechanism will not be used. 00:15:44.013 Running I/O for 4 seconds... 
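Two things are worth noting in the trace above: bdevperf.sh@28 confirms the device is answering RPCs before driving I/O, and bdevperf itself warns that the 69632-byte I/O size exceeds its 65536-byte zero-copy threshold, so buffers are copied on this run. The health check is the three-stage pipeline shown in the trace; a standalone sketch, assuming a running SPDK target and the repo path from the log:

    cd /home/vagrant/spdk_repo/spdk
    # bdev_ftl_get_stats returns a JSON object whose .name is the bdev name
    ./scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0 \
      && echo "ftl0 is serving RPCs"

The four-second randwrite run it gates streams its per-second samples next.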
00:15:45.903 970.00 IOPS, 64.41 MiB/s [2024-11-19T23:28:33.039Z] 840.00 IOPS, 55.78 MiB/s [2024-11-19T23:28:34.426Z] 795.67 IOPS, 52.84 MiB/s [2024-11-19T23:28:34.426Z] 961.75 IOPS, 63.87 MiB/s 00:15:48.234 Latency(us) 00:15:48.234 [2024-11-19T23:28:34.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.234 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:48.234 ftl0 : 4.00 961.66 63.86 0.00 0.00 1099.90 178.02 3629.69 00:15:48.234 [2024-11-19T23:28:34.426Z] =================================================================================================================== 00:15:48.234 [2024-11-19T23:28:34.426Z] Total : 961.66 63.86 0.00 0.00 1099.90 178.02 3629.69 00:15:48.234 [2024-11-19 23:28:33.991557] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:48.234 { 00:15:48.234 "results": [ 00:15:48.234 { 00:15:48.234 "job": "ftl0", 00:15:48.234 "core_mask": "0x1", 00:15:48.234 "workload": "randwrite", 00:15:48.234 "status": "finished", 00:15:48.234 "queue_depth": 1, 00:15:48.234 "io_size": 69632, 00:15:48.234 "runtime": 4.001415, 00:15:48.234 "iops": 961.6598128412074, 00:15:48.234 "mibps": 63.86022194648643, 00:15:48.234 "io_failed": 0, 00:15:48.234 "io_timeout": 0, 00:15:48.234 "avg_latency_us": 1099.9001311370541, 00:15:48.234 "min_latency_us": 178.01846153846154, 00:15:48.234 "max_latency_us": 3629.686153846154 00:15:48.234 } 00:15:48.234 ], 00:15:48.234 "core_count": 1 00:15:48.234 } 00:15:48.234 23:28:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:48.234 [2024-11-19 23:28:34.096070] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:48.234 Running I/O for 4 seconds... 
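The summary above is internally consistent: throughput in MiB/s is simply IOPS times the 69632-byte I/O size. A quick check, using the iops value copied from the JSON block above:

    awk 'BEGIN { printf "%.4f MiB/s\n", 961.6598128412074 * 69632 / 1048576 }'

prints 63.8602, matching the reported "mibps". The second run, launched above with queue depth 128 and 4 KiB I/O, follows below.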
00:15:50.112 8184.00 IOPS, 31.97 MiB/s [2024-11-19T23:28:37.292Z] 7166.00 IOPS, 27.99 MiB/s [2024-11-19T23:28:38.297Z] 7001.67 IOPS, 27.35 MiB/s [2024-11-19T23:28:38.297Z] 6771.25 IOPS, 26.45 MiB/s 00:15:52.105 Latency(us) 00:15:52.105 [2024-11-19T23:28:38.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.105 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.105 ftl0 : 4.04 6743.17 26.34 0.00 0.00 18911.63 258.36 45169.43 00:15:52.105 [2024-11-19T23:28:38.297Z] =================================================================================================================== 00:15:52.105 [2024-11-19T23:28:38.297Z] Total : 6743.17 26.34 0.00 0.00 18911.63 0.00 45169.43 00:15:52.105 [2024-11-19 23:28:38.137339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:52.105 { 00:15:52.105 "results": [ 00:15:52.105 { 00:15:52.105 "job": "ftl0", 00:15:52.105 "core_mask": "0x1", 00:15:52.105 "workload": "randwrite", 00:15:52.105 "status": "finished", 00:15:52.105 "queue_depth": 128, 00:15:52.105 "io_size": 4096, 00:15:52.105 "runtime": 4.035341, 00:15:52.105 "iops": 6743.172386175047, 00:15:52.105 "mibps": 26.340517133496277, 00:15:52.105 "io_failed": 0, 00:15:52.105 "io_timeout": 0, 00:15:52.105 "avg_latency_us": 18911.625646415618, 00:15:52.105 "min_latency_us": 258.3630769230769, 00:15:52.105 "max_latency_us": 45169.42769230769 00:15:52.105 } 00:15:52.105 ], 00:15:52.105 "core_count": 1 00:15:52.105 } 00:15:52.105 23:28:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:52.105 [2024-11-19 23:28:38.240639] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:52.105 Running I/O for 4 seconds... 
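The verify workload just launched drives writes and read-back verification over a fixed LBA range; in the results below, the reported verify_range length of 20971520 bytes is the 0x1400000 shown in the table header, i.e. 20 MiB at 4 KiB per I/O. The conversion, in plain bash:

    printf '0x%x bytes = %d MiB\n' 20971520 $((20971520 / 1048576))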
00:15:54.065 4635.00 IOPS, 18.11 MiB/s [2024-11-19T23:28:41.644Z] 4745.50 IOPS, 18.54 MiB/s [2024-11-19T23:28:42.589Z] 5227.67 IOPS, 20.42 MiB/s [2024-11-19T23:28:42.589Z] 5266.50 IOPS, 20.57 MiB/s 00:15:56.397 Latency(us) 00:15:56.397 [2024-11-19T23:28:42.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.398 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:56.398 Verification LBA range: start 0x0 length 0x1400000 00:15:56.398 ftl0 : 4.02 5278.58 20.62 0.00 0.00 24179.55 228.43 38918.30 00:15:56.398 [2024-11-19T23:28:42.590Z] =================================================================================================================== 00:15:56.398 [2024-11-19T23:28:42.590Z] Total : 5278.58 20.62 0.00 0.00 24179.55 0.00 38918.30 00:15:56.398 [2024-11-19 23:28:42.263203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:56.398 { 00:15:56.398 "results": [ 00:15:56.398 { 00:15:56.398 "job": "ftl0", 00:15:56.398 "core_mask": "0x1", 00:15:56.398 "workload": "verify", 00:15:56.398 "status": "finished", 00:15:56.398 "verify_range": { 00:15:56.398 "start": 0, 00:15:56.398 "length": 20971520 00:15:56.398 }, 00:15:56.398 "queue_depth": 128, 00:15:56.398 "io_size": 4096, 00:15:56.398 "runtime": 4.015097, 00:15:56.398 "iops": 5278.577329514082, 00:15:56.398 "mibps": 20.61944269341438, 00:15:56.398 "io_failed": 0, 00:15:56.398 "io_timeout": 0, 00:15:56.398 "avg_latency_us": 24179.54997190787, 00:15:56.398 "min_latency_us": 228.43076923076924, 00:15:56.398 "max_latency_us": 38918.301538461536 00:15:56.398 } 00:15:56.398 ], 00:15:56.398 "core_count": 1 00:15:56.398 } 00:15:56.398 23:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:15:56.398 [2024-11-19 23:28:42.479521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.398 [2024-11-19 23:28:42.479587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:56.398 [2024-11-19 23:28:42.479603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:56.398 [2024-11-19 23:28:42.479612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.398 [2024-11-19 23:28:42.479637] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:56.398 [2024-11-19 23:28:42.480328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.398 [2024-11-19 23:28:42.480371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:56.398 [2024-11-19 23:28:42.480384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:15:56.398 [2024-11-19 23:28:42.480397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.398 [2024-11-19 23:28:42.483463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.398 [2024-11-19 23:28:42.483517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:56.398 [2024-11-19 23:28:42.483535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.039 ms 00:15:56.398 [2024-11-19 23:28:42.483555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.668 [2024-11-19 23:28:42.711620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.669 [2024-11-19 23:28:42.711680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:15:56.669 [2024-11-19 23:28:42.711693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 228.046 ms 00:15:56.669 [2024-11-19 23:28:42.711712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.669 [2024-11-19 23:28:42.717910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.669 [2024-11-19 23:28:42.717956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:56.669 [2024-11-19 23:28:42.717968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.141 ms 00:15:56.669 [2024-11-19 23:28:42.717979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.669 [2024-11-19 23:28:42.720860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.669 [2024-11-19 23:28:42.721066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:56.669 [2024-11-19 23:28:42.721086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.810 ms 00:15:56.669 [2024-11-19 23:28:42.721097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.669 [2024-11-19 23:28:42.727719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.669 [2024-11-19 23:28:42.727928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:56.669 [2024-11-19 23:28:42.727946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.526 ms 00:15:56.669 [2024-11-19 23:28:42.727961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.670 [2024-11-19 23:28:42.728104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.670 [2024-11-19 23:28:42.728118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:56.670 [2024-11-19 23:28:42.728128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:15:56.670 [2024-11-19 23:28:42.728138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.670 [2024-11-19 23:28:42.731317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.670 [2024-11-19 23:28:42.731487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:56.670 [2024-11-19 23:28:42.731504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.162 ms 00:15:56.670 [2024-11-19 23:28:42.731513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.670 [2024-11-19 23:28:42.734126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.670 [2024-11-19 23:28:42.734183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:56.670 [2024-11-19 23:28:42.734193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.520 ms 00:15:56.670 [2024-11-19 23:28:42.734203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.671 [2024-11-19 23:28:42.736260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.671 [2024-11-19 23:28:42.736314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:56.671 [2024-11-19 23:28:42.736325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:15:56.671 [2024-11-19 23:28:42.736341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.671 [2024-11-19 23:28:42.738464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.671 [2024-11-19 23:28:42.738517] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:56.671 [2024-11-19 23:28:42.738527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:15:56.671 [2024-11-19 23:28:42.738558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.671 [2024-11-19 23:28:42.738597] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:56.671 [2024-11-19 23:28:42.738615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:56.671 [2024-11-19 23:28:42.738626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:56.671 [2024-11-19 23:28:42.738636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:56.671 [2024-11-19 23:28:42.738645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:56.671 [2024-11-19 23:28:42.738655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:56.671 [2024-11-19 23:28:42.738663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:56.671 [2024-11-19 23:28:42.738673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:56.672 [2024-11-19 23:28:42.738800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:56.673 [2024-11-19 23:28:42.738808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:56.673 [2024-11-19 23:28:42.738818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:56.673 [2024-11-19 23:28:42.738825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:15:56.673 [2024-11-19 23:28:42.738835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:56.673 [2024-11-19 23:28:42.738842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:56.673 [2024-11-19 23:28:42.738852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:56.673 [2024-11-19 23:28:42.738859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:56.674 [2024-11-19 23:28:42.738957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.738967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.738974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.738984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.738991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.739000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.739008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.739020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.739037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.739046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:56.675 [2024-11-19 23:28:42.739053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:56.676 [2024-11-19 23:28:42.739202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:56.677 [2024-11-19 23:28:42.739420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739504] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:56.678 [2024-11-19 23:28:42.739546] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:56.678 [2024-11-19 23:28:42.739556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6dc0359-fdbc-451c-8a0a-ed004f0c65ce 00:15:56.678 [2024-11-19 23:28:42.739566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:56.679 [2024-11-19 23:28:42.739574] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:56.679 [2024-11-19 23:28:42.739582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:56.679 [2024-11-19 23:28:42.739595] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:56.679 [2024-11-19 23:28:42.739605] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:56.679 [2024-11-19 23:28:42.739613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:56.679 [2024-11-19 23:28:42.739622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:56.679 [2024-11-19 23:28:42.739629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:56.679 [2024-11-19 23:28:42.739638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:56.679 [2024-11-19 23:28:42.739645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.679 [2024-11-19 23:28:42.739655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:56.679 [2024-11-19 23:28:42.739667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:15:56.679 [2024-11-19 23:28:42.739677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.679 [2024-11-19 23:28:42.741895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.679 [2024-11-19 23:28:42.741931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:56.679 [2024-11-19 23:28:42.741941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:15:56.679 [2024-11-19 23:28:42.741952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.679 [2024-11-19 23:28:42.742059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.679 [2024-11-19 23:28:42.742069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:56.679 [2024-11-19 23:28:42.742081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:15:56.679 [2024-11-19 23:28:42.742093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.679 [2024-11-19 23:28:42.749724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.679 [2024-11-19 23:28:42.749796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:56.679 [2024-11-19 23:28:42.749807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.679 [2024-11-19 23:28:42.749818] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:15:56.680 [2024-11-19 23:28:42.749888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.680 [2024-11-19 23:28:42.749900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:56.680 [2024-11-19 23:28:42.749912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.680 [2024-11-19 23:28:42.749921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.680 [2024-11-19 23:28:42.749988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.680 [2024-11-19 23:28:42.750001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:56.680 [2024-11-19 23:28:42.750009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.680 [2024-11-19 23:28:42.750018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.680 [2024-11-19 23:28:42.750034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.680 [2024-11-19 23:28:42.750044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:56.680 [2024-11-19 23:28:42.750052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.680 [2024-11-19 23:28:42.750068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.681 [2024-11-19 23:28:42.762985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.681 [2024-11-19 23:28:42.763037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:56.681 [2024-11-19 23:28:42.763048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.681 [2024-11-19 23:28:42.763058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.681 [2024-11-19 23:28:42.773432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.681 [2024-11-19 23:28:42.773640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:56.681 [2024-11-19 23:28:42.773659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.681 [2024-11-19 23:28:42.773673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.681 [2024-11-19 23:28:42.773768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.681 [2024-11-19 23:28:42.773782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:56.681 [2024-11-19 23:28:42.773791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.681 [2024-11-19 23:28:42.773801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.682 [2024-11-19 23:28:42.773843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.682 [2024-11-19 23:28:42.773855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:56.682 [2024-11-19 23:28:42.773864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.682 [2024-11-19 23:28:42.773876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.682 [2024-11-19 23:28:42.773960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.682 [2024-11-19 23:28:42.773972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:56.682 [2024-11-19 23:28:42.773983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:15:56.682 [2024-11-19 23:28:42.773993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.682 [2024-11-19 23:28:42.774033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.683 [2024-11-19 23:28:42.774045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:56.683 [2024-11-19 23:28:42.774053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.683 [2024-11-19 23:28:42.774062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.683 [2024-11-19 23:28:42.774103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.683 [2024-11-19 23:28:42.774114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:56.683 [2024-11-19 23:28:42.774123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.683 [2024-11-19 23:28:42.774132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.683 [2024-11-19 23:28:42.774176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:56.683 [2024-11-19 23:28:42.774188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:56.683 [2024-11-19 23:28:42.774198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:56.683 [2024-11-19 23:28:42.774211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.684 [2024-11-19 23:28:42.774351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 294.787 ms, result 0 00:15:56.684 true 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84713 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 84713 ']' 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 84713 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84713 00:15:56.684 killing process with pid 84713 00:15:56.684 Received shutdown signal, test time was about 4.000000 seconds 00:15:56.684 00:15:56.684 Latency(us) 00:15:56.684 [2024-11-19T23:28:42.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.684 [2024-11-19T23:28:42.876Z] =================================================================================================================== 00:15:56.684 [2024-11-19T23:28:42.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84713' 00:15:56.684 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 84713 00:15:56.685 23:28:42 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 84713 00:15:56.958 Remove shared memory files 00:15:56.958 23:28:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:56.958 23:28:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:15:56.958 23:28:43 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:56.958 23:28:43 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:15:56.958 23:28:43 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:15:56.958 23:28:43 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:15:56.958 23:28:43 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:56.958 23:28:43 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:15:57.220 ************************************ 00:15:57.220 END TEST ftl_bdevperf 00:15:57.220 ************************************ 00:15:57.220 00:15:57.220 real 0m21.738s 00:15:57.220 user 0m24.369s 00:15:57.220 sys 0m0.953s 00:15:57.220 23:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.220 23:28:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:57.220 23:28:43 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:57.220 23:28:43 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:57.220 23:28:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.220 23:28:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:57.220 ************************************ 00:15:57.220 START TEST ftl_trim 00:15:57.220 ************************************ 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:57.220 * Looking for test storage... 00:15:57.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.220 23:28:43 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.220 --rc genhtml_branch_coverage=1 00:15:57.220 --rc genhtml_function_coverage=1 00:15:57.220 --rc genhtml_legend=1 00:15:57.220 --rc geninfo_all_blocks=1 00:15:57.220 --rc geninfo_unexecuted_blocks=1 00:15:57.220 00:15:57.220 ' 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.220 --rc genhtml_branch_coverage=1 00:15:57.220 --rc genhtml_function_coverage=1 00:15:57.220 --rc genhtml_legend=1 00:15:57.220 --rc geninfo_all_blocks=1 00:15:57.220 --rc geninfo_unexecuted_blocks=1 00:15:57.220 00:15:57.220 ' 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.220 --rc genhtml_branch_coverage=1 00:15:57.220 --rc genhtml_function_coverage=1 00:15:57.220 --rc genhtml_legend=1 00:15:57.220 --rc geninfo_all_blocks=1 00:15:57.220 --rc geninfo_unexecuted_blocks=1 00:15:57.220 00:15:57.220 ' 00:15:57.220 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:57.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.220 --rc genhtml_branch_coverage=1 00:15:57.220 --rc genhtml_function_coverage=1 00:15:57.220 --rc genhtml_legend=1 00:15:57.220 --rc geninfo_all_blocks=1 00:15:57.220 --rc geninfo_unexecuted_blocks=1 00:15:57.220 00:15:57.220 ' 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:57.220 23:28:43 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:57.221 23:28:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:57.221 23:28:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:57.221 23:28:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.221 23:28:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:57.221 23:28:43 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:57.221 23:28:43 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85062 00:15:57.221 23:28:43 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85062 00:15:57.221 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85062 ']' 00:15:57.221 23:28:43 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:57.221 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.221 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.221 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.221 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.221 23:28:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:15:57.482 [2024-11-19 23:28:43.488514] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:15:57.482 [2024-11-19 23:28:43.488674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85062 ] 00:15:57.482 [2024-11-19 23:28:43.657472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.744 [2024-11-19 23:28:43.688844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.744 [2024-11-19 23:28:43.689150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.744 [2024-11-19 23:28:43.689205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.326 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.326 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:15:58.326 23:28:44 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:58.326 23:28:44 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:15:58.326 23:28:44 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:58.326 23:28:44 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:15:58.326 23:28:44 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:15:58.327 23:28:44 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:58.589 23:28:44 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:58.589 23:28:44 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:15:58.589 23:28:44 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:58.589 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:15:58.589 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:58.589 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:15:58.589 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:15:58.589 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:58.851 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:58.851 { 00:15:58.851 "name": "nvme0n1", 00:15:58.851 "aliases": [ 
00:15:58.851 "a7bd8fe8-5dc6-4832-a174-cdd4fced71f6" 00:15:58.851 ], 00:15:58.851 "product_name": "NVMe disk", 00:15:58.851 "block_size": 4096, 00:15:58.851 "num_blocks": 1310720, 00:15:58.851 "uuid": "a7bd8fe8-5dc6-4832-a174-cdd4fced71f6", 00:15:58.851 "numa_id": -1, 00:15:58.851 "assigned_rate_limits": { 00:15:58.851 "rw_ios_per_sec": 0, 00:15:58.851 "rw_mbytes_per_sec": 0, 00:15:58.851 "r_mbytes_per_sec": 0, 00:15:58.851 "w_mbytes_per_sec": 0 00:15:58.851 }, 00:15:58.851 "claimed": true, 00:15:58.851 "claim_type": "read_many_write_one", 00:15:58.851 "zoned": false, 00:15:58.851 "supported_io_types": { 00:15:58.851 "read": true, 00:15:58.851 "write": true, 00:15:58.851 "unmap": true, 00:15:58.851 "flush": true, 00:15:58.851 "reset": true, 00:15:58.851 "nvme_admin": true, 00:15:58.851 "nvme_io": true, 00:15:58.851 "nvme_io_md": false, 00:15:58.851 "write_zeroes": true, 00:15:58.851 "zcopy": false, 00:15:58.851 "get_zone_info": false, 00:15:58.851 "zone_management": false, 00:15:58.851 "zone_append": false, 00:15:58.851 "compare": true, 00:15:58.851 "compare_and_write": false, 00:15:58.851 "abort": true, 00:15:58.851 "seek_hole": false, 00:15:58.851 "seek_data": false, 00:15:58.851 "copy": true, 00:15:58.851 "nvme_iov_md": false 00:15:58.851 }, 00:15:58.851 "driver_specific": { 00:15:58.851 "nvme": [ 00:15:58.851 { 00:15:58.851 "pci_address": "0000:00:11.0", 00:15:58.851 "trid": { 00:15:58.851 "trtype": "PCIe", 00:15:58.851 "traddr": "0000:00:11.0" 00:15:58.851 }, 00:15:58.851 "ctrlr_data": { 00:15:58.851 "cntlid": 0, 00:15:58.851 "vendor_id": "0x1b36", 00:15:58.851 "model_number": "QEMU NVMe Ctrl", 00:15:58.851 "serial_number": "12341", 00:15:58.851 "firmware_revision": "8.0.0", 00:15:58.851 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:58.851 "oacs": { 00:15:58.851 "security": 0, 00:15:58.851 "format": 1, 00:15:58.851 "firmware": 0, 00:15:58.851 "ns_manage": 1 00:15:58.851 }, 00:15:58.851 "multi_ctrlr": false, 00:15:58.851 "ana_reporting": false 00:15:58.851 }, 00:15:58.851 "vs": { 00:15:58.851 "nvme_version": "1.4" 00:15:58.851 }, 00:15:58.851 "ns_data": { 00:15:58.851 "id": 1, 00:15:58.851 "can_share": false 00:15:58.851 } 00:15:58.851 } 00:15:58.851 ], 00:15:58.851 "mp_policy": "active_passive" 00:15:58.851 } 00:15:58.851 } 00:15:58.851 ]' 00:15:58.851 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:58.851 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:15:58.851 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:58.851 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:15:58.851 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:15:58.851 23:28:44 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:15:58.851 23:28:44 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:15:58.851 23:28:44 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:58.851 23:28:44 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:15:58.851 23:28:44 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:58.851 23:28:44 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:59.113 23:28:45 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b6d5af07-8312-4f7a-ba72-8654fb2281c7 00:15:59.113 23:28:45 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:15:59.113 23:28:45 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b6d5af07-8312-4f7a-ba72-8654fb2281c7 00:15:59.376 23:28:45 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=c6b3f464-96d4-49c7-97c8-3e9d3bd16784 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c6b3f464-96d4-49c7-97c8-3e9d3bd16784 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=0700003b-0fb5-463c-a1bc-22546d8df7dd 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0700003b-0fb5-463c-a1bc-22546d8df7dd 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=0700003b-0fb5-463c-a1bc-22546d8df7dd 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:15:59.637 23:28:45 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 0700003b-0fb5-463c-a1bc-22546d8df7dd 00:15:59.637 23:28:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0700003b-0fb5-463c-a1bc-22546d8df7dd 00:15:59.637 23:28:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:59.637 23:28:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:15:59.637 23:28:45 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:15:59.899 23:28:45 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0700003b-0fb5-463c-a1bc-22546d8df7dd 00:15:59.899 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:59.899 { 00:15:59.899 "name": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 00:15:59.899 "aliases": [ 00:15:59.899 "lvs/nvme0n1p0" 00:15:59.899 ], 00:15:59.899 "product_name": "Logical Volume", 00:15:59.899 "block_size": 4096, 00:15:59.899 "num_blocks": 26476544, 00:15:59.899 "uuid": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 00:15:59.899 "assigned_rate_limits": { 00:15:59.899 "rw_ios_per_sec": 0, 00:15:59.899 "rw_mbytes_per_sec": 0, 00:15:59.899 "r_mbytes_per_sec": 0, 00:15:59.899 "w_mbytes_per_sec": 0 00:15:59.899 }, 00:15:59.899 "claimed": false, 00:15:59.899 "zoned": false, 00:15:59.899 "supported_io_types": { 00:15:59.899 "read": true, 00:15:59.899 "write": true, 00:15:59.899 "unmap": true, 00:15:59.899 "flush": false, 00:15:59.899 "reset": true, 00:15:59.899 "nvme_admin": false, 00:15:59.899 "nvme_io": false, 00:15:59.899 "nvme_io_md": false, 00:15:59.899 "write_zeroes": true, 00:15:59.899 "zcopy": false, 00:15:59.899 "get_zone_info": false, 00:15:59.899 "zone_management": false, 00:15:59.899 "zone_append": false, 00:15:59.899 "compare": false, 00:15:59.899 "compare_and_write": false, 00:15:59.899 "abort": false, 00:15:59.899 "seek_hole": true, 00:15:59.899 "seek_data": true, 00:15:59.899 "copy": false, 00:15:59.899 "nvme_iov_md": false 00:15:59.899 }, 00:15:59.899 "driver_specific": { 00:15:59.899 "lvol": { 00:15:59.899 "lvol_store_uuid": "c6b3f464-96d4-49c7-97c8-3e9d3bd16784", 00:15:59.899 "base_bdev": "nvme0n1", 00:15:59.899 "thin_provision": true, 00:15:59.899 "num_allocated_clusters": 0, 00:15:59.899 "snapshot": false, 00:15:59.899 "clone": false, 00:15:59.899 "esnap_clone": false 00:15:59.899 } 00:15:59.899 } 00:15:59.899 } 00:15:59.899 ]' 00:15:59.899 23:28:46 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:59.899 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:15:59.899 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:00.157 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:00.157 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:00.157 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:00.158 23:28:46 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:00.158 23:28:46 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:00.158 23:28:46 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:00.416 23:28:46 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:00.416 23:28:46 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:00.416 23:28:46 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 0700003b-0fb5-463c-a1bc-22546d8df7dd 00:16:00.416 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0700003b-0fb5-463c-a1bc-22546d8df7dd 00:16:00.416 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:00.416 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:00.416 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:00.416 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0700003b-0fb5-463c-a1bc-22546d8df7dd 00:16:00.416 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:00.416 { 00:16:00.416 "name": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 00:16:00.416 "aliases": [ 00:16:00.416 "lvs/nvme0n1p0" 00:16:00.416 ], 00:16:00.416 "product_name": "Logical Volume", 00:16:00.416 "block_size": 4096, 00:16:00.416 "num_blocks": 26476544, 00:16:00.416 "uuid": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 00:16:00.416 "assigned_rate_limits": { 00:16:00.416 "rw_ios_per_sec": 0, 00:16:00.416 "rw_mbytes_per_sec": 0, 00:16:00.416 "r_mbytes_per_sec": 0, 00:16:00.416 "w_mbytes_per_sec": 0 00:16:00.416 }, 00:16:00.416 "claimed": false, 00:16:00.416 "zoned": false, 00:16:00.416 "supported_io_types": { 00:16:00.416 "read": true, 00:16:00.416 "write": true, 00:16:00.416 "unmap": true, 00:16:00.416 "flush": false, 00:16:00.416 "reset": true, 00:16:00.416 "nvme_admin": false, 00:16:00.416 "nvme_io": false, 00:16:00.416 "nvme_io_md": false, 00:16:00.416 "write_zeroes": true, 00:16:00.416 "zcopy": false, 00:16:00.416 "get_zone_info": false, 00:16:00.416 "zone_management": false, 00:16:00.416 "zone_append": false, 00:16:00.416 "compare": false, 00:16:00.416 "compare_and_write": false, 00:16:00.416 "abort": false, 00:16:00.416 "seek_hole": true, 00:16:00.416 "seek_data": true, 00:16:00.416 "copy": false, 00:16:00.416 "nvme_iov_md": false 00:16:00.416 }, 00:16:00.416 "driver_specific": { 00:16:00.416 "lvol": { 00:16:00.416 "lvol_store_uuid": "c6b3f464-96d4-49c7-97c8-3e9d3bd16784", 00:16:00.416 "base_bdev": "nvme0n1", 00:16:00.416 "thin_provision": true, 00:16:00.416 "num_allocated_clusters": 0, 00:16:00.416 "snapshot": false, 00:16:00.416 "clone": false, 00:16:00.416 "esnap_clone": false 00:16:00.416 } 00:16:00.416 } 00:16:00.416 } 00:16:00.416 ]' 00:16:00.416 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:00.675 23:28:46 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:00.675 23:28:46 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:00.675 23:28:46 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:00.675 23:28:46 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:00.675 23:28:46 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:00.675 23:28:46 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 0700003b-0fb5-463c-a1bc-22546d8df7dd 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=0700003b-0fb5-463c-a1bc-22546d8df7dd 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:16:00.675 23:28:46 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0700003b-0fb5-463c-a1bc-22546d8df7dd 00:16:00.933 23:28:47 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:00.933 { 00:16:00.933 "name": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 00:16:00.933 "aliases": [ 00:16:00.933 "lvs/nvme0n1p0" 00:16:00.933 ], 00:16:00.933 "product_name": "Logical Volume", 00:16:00.933 "block_size": 4096, 00:16:00.933 "num_blocks": 26476544, 00:16:00.933 "uuid": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 00:16:00.933 "assigned_rate_limits": { 00:16:00.933 "rw_ios_per_sec": 0, 00:16:00.933 "rw_mbytes_per_sec": 0, 00:16:00.933 "r_mbytes_per_sec": 0, 00:16:00.933 "w_mbytes_per_sec": 0 00:16:00.933 }, 00:16:00.933 "claimed": false, 00:16:00.933 "zoned": false, 00:16:00.933 "supported_io_types": { 00:16:00.933 "read": true, 00:16:00.933 "write": true, 00:16:00.933 "unmap": true, 00:16:00.933 "flush": false, 00:16:00.933 "reset": true, 00:16:00.933 "nvme_admin": false, 00:16:00.933 "nvme_io": false, 00:16:00.933 "nvme_io_md": false, 00:16:00.933 "write_zeroes": true, 00:16:00.933 "zcopy": false, 00:16:00.933 "get_zone_info": false, 00:16:00.933 "zone_management": false, 00:16:00.933 "zone_append": false, 00:16:00.933 "compare": false, 00:16:00.933 "compare_and_write": false, 00:16:00.933 "abort": false, 00:16:00.934 "seek_hole": true, 00:16:00.934 "seek_data": true, 00:16:00.934 "copy": false, 00:16:00.934 "nvme_iov_md": false 00:16:00.934 }, 00:16:00.934 "driver_specific": { 00:16:00.934 "lvol": { 00:16:00.934 "lvol_store_uuid": "c6b3f464-96d4-49c7-97c8-3e9d3bd16784", 00:16:00.934 "base_bdev": "nvme0n1", 00:16:00.934 "thin_provision": true, 00:16:00.934 "num_allocated_clusters": 0, 00:16:00.934 "snapshot": false, 00:16:00.934 "clone": false, 00:16:00.934 "esnap_clone": false 00:16:00.934 } 00:16:00.934 } 00:16:00.934 } 00:16:00.934 ]' 00:16:00.934 23:28:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:00.934 23:28:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:16:00.934 23:28:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:00.934 23:28:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:16:00.934 23:28:47 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:00.934 23:28:47 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:16:00.934 23:28:47 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:00.934 23:28:47 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0700003b-0fb5-463c-a1bc-22546d8df7dd -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:01.193 [2024-11-19 23:28:47.276159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.276197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:01.193 [2024-11-19 23:28:47.276208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:01.193 [2024-11-19 23:28:47.276217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.278072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.278188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:01.193 [2024-11-19 23:28:47.278200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:16:01.193 [2024-11-19 23:28:47.278209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.278279] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:01.193 [2024-11-19 23:28:47.278451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:01.193 [2024-11-19 23:28:47.278471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.278479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:01.193 [2024-11-19 23:28:47.278489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:16:01.193 [2024-11-19 23:28:47.278495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.278719] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0f3b695f-3979-428e-be6f-ba7a91aa0ca7 00:16:01.193 [2024-11-19 23:28:47.279670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.279709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:01.193 [2024-11-19 23:28:47.279719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:01.193 [2024-11-19 23:28:47.279725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.284837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.284938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:01.193 [2024-11-19 23:28:47.284952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:16:01.193 [2024-11-19 23:28:47.284969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.285059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.285066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:01.193 [2024-11-19 23:28:47.285074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.046 ms 00:16:01.193 [2024-11-19 23:28:47.285081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.285109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.285123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:01.193 [2024-11-19 23:28:47.285131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:01.193 [2024-11-19 23:28:47.285136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.285164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:01.193 [2024-11-19 23:28:47.286423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.286449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:01.193 [2024-11-19 23:28:47.286456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:16:01.193 [2024-11-19 23:28:47.286465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.193 [2024-11-19 23:28:47.286502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.193 [2024-11-19 23:28:47.286518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:01.193 [2024-11-19 23:28:47.286525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:01.194 [2024-11-19 23:28:47.286533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.194 [2024-11-19 23:28:47.286562] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:01.194 [2024-11-19 23:28:47.286667] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:01.194 [2024-11-19 23:28:47.286676] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:01.194 [2024-11-19 23:28:47.286685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:01.194 [2024-11-19 23:28:47.286693] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:01.194 [2024-11-19 23:28:47.286701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:01.194 [2024-11-19 23:28:47.286707] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:01.194 [2024-11-19 23:28:47.286723] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:01.194 [2024-11-19 23:28:47.286737] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:01.194 [2024-11-19 23:28:47.286746] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:01.194 [2024-11-19 23:28:47.286753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.194 [2024-11-19 23:28:47.286761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:01.194 [2024-11-19 23:28:47.286767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:16:01.194 [2024-11-19 23:28:47.286774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.194 [2024-11-19 23:28:47.286851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.194 
[2024-11-19 23:28:47.286861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:01.194 [2024-11-19 23:28:47.286866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:01.194 [2024-11-19 23:28:47.286881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.194 [2024-11-19 23:28:47.286979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:01.194 [2024-11-19 23:28:47.286989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:01.194 [2024-11-19 23:28:47.286995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:01.194 [2024-11-19 23:28:47.287014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:01.194 [2024-11-19 23:28:47.287031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.194 [2024-11-19 23:28:47.287042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:01.194 [2024-11-19 23:28:47.287048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:01.194 [2024-11-19 23:28:47.287054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:01.194 [2024-11-19 23:28:47.287062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:01.194 [2024-11-19 23:28:47.287067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:01.194 [2024-11-19 23:28:47.287075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:01.194 [2024-11-19 23:28:47.287089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:01.194 [2024-11-19 23:28:47.287108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:01.194 [2024-11-19 23:28:47.287128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:01.194 [2024-11-19 23:28:47.287146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:01.194 [2024-11-19 23:28:47.287166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:01.194 [2024-11-19 23:28:47.287184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.194 [2024-11-19 23:28:47.287197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:01.194 [2024-11-19 23:28:47.287204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:01.194 [2024-11-19 23:28:47.287209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:01.194 [2024-11-19 23:28:47.287216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:01.194 [2024-11-19 23:28:47.287222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:01.194 [2024-11-19 23:28:47.287230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:01.194 [2024-11-19 23:28:47.287242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:01.194 [2024-11-19 23:28:47.287248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287254] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:01.194 [2024-11-19 23:28:47.287261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:01.194 [2024-11-19 23:28:47.287270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:01.194 [2024-11-19 23:28:47.287292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:01.194 [2024-11-19 23:28:47.287298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:01.194 [2024-11-19 23:28:47.287306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:01.194 [2024-11-19 23:28:47.287311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:01.194 [2024-11-19 23:28:47.287318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:01.194 [2024-11-19 23:28:47.287324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:01.194 [2024-11-19 23:28:47.287334] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:01.194 [2024-11-19 23:28:47.287342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.194 [2024-11-19 23:28:47.287350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:01.194 [2024-11-19 23:28:47.287356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:01.194 [2024-11-19 23:28:47.287364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:01.194 [2024-11-19 23:28:47.287370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:01.194 [2024-11-19 23:28:47.287377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:01.194 [2024-11-19 23:28:47.287383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:01.194 [2024-11-19 23:28:47.287393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:01.194 [2024-11-19 23:28:47.287400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:01.194 [2024-11-19 23:28:47.287407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:01.194 [2024-11-19 23:28:47.287413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:01.194 [2024-11-19 23:28:47.287420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:01.194 [2024-11-19 23:28:47.287426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:01.195 [2024-11-19 23:28:47.287434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:01.195 [2024-11-19 23:28:47.287440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:01.195 [2024-11-19 23:28:47.287447] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:01.195 [2024-11-19 23:28:47.287454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:01.195 [2024-11-19 23:28:47.287464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:01.195 [2024-11-19 23:28:47.287470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:01.195 [2024-11-19 23:28:47.287478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:01.195 [2024-11-19 23:28:47.287484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:01.195 [2024-11-19 23:28:47.287490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.195 [2024-11-19 23:28:47.287496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:01.195 [2024-11-19 23:28:47.287504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:16:01.195 [2024-11-19 23:28:47.287516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.195 [2024-11-19 23:28:47.287598] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:01.195 [2024-11-19 23:28:47.287605] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:03.740 [2024-11-19 23:28:49.872332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.872408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:03.740 [2024-11-19 23:28:49.872425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2584.715 ms 00:16:03.740 [2024-11-19 23:28:49.872437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.881761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.881800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:03.740 [2024-11-19 23:28:49.881813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.202 ms 00:16:03.740 [2024-11-19 23:28:49.881821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.881964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.881974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:03.740 [2024-11-19 23:28:49.881999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:03.740 [2024-11-19 23:28:49.882009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.902993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.903057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:03.740 [2024-11-19 23:28:49.903082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.933 ms 00:16:03.740 [2024-11-19 23:28:49.903097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.903251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.903291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:03.740 [2024-11-19 23:28:49.903313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:03.740 [2024-11-19 23:28:49.903327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.903814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.903874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:03.740 [2024-11-19 23:28:49.903896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:16:03.740 [2024-11-19 23:28:49.903925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.904180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.904204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:03.740 [2024-11-19 23:28:49.904221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:16:03.740 [2024-11-19 23:28:49.904242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.912337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.740 [2024-11-19 23:28:49.912388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:03.740 [2024-11-19 23:28:49.912408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.031 ms 00:16:03.740 [2024-11-19 23:28:49.912421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.740 [2024-11-19 23:28:49.921450] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:04.003 [2024-11-19 23:28:49.938663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:49.938710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:04.003 [2024-11-19 23:28:49.938723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.087 ms 00:16:04.003 [2024-11-19 23:28:49.938751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.040808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.040886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:04.003 [2024-11-19 23:28:50.040902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.965 ms 00:16:04.003 [2024-11-19 23:28:50.040917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.041188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.041204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:04.003 [2024-11-19 23:28:50.041214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:16:04.003 [2024-11-19 23:28:50.041225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.047466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.047523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:04.003 [2024-11-19 23:28:50.047535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:16:04.003 [2024-11-19 23:28:50.047546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.052584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.052635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:04.003 [2024-11-19 23:28:50.052647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.951 ms 00:16:04.003 [2024-11-19 23:28:50.052657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.053060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.053105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:04.003 [2024-11-19 23:28:50.053115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:16:04.003 [2024-11-19 23:28:50.053127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.101918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.102158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:04.003 [2024-11-19 23:28:50.102182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.730 ms 00:16:04.003 [2024-11-19 23:28:50.102199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
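The bring-up being traced here is driven entirely through rpc.py. A minimal sketch of the same sequence, with every command, PCIe address, and size taken from the trace itself (the two UUID arguments are placeholders for values returned at runtime — c6b3f464-… for the lvstore and 0700003b-… for the lvol in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
  $rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid' \
      | xargs -r -n1 $rpc bdev_lvol_delete_lvstore -u                 # drop any stale lvstores first
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base bdev
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # 103424 MiB thin-provisioned lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 \
      --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10         # FTL bdev over lvol + NV cache

The numbers in the trace are mutually consistent: 23592960 L2P entries at an address size of 4 bytes is a 90 MiB mapping table (23592960 * 4 / 1048576 = 90), matching the 90.00 MiB l2p region in the layout dump and the "l2p maximum resident size is: 59 (of 60) MiB" notice above, given the 60 MiB --l2p_dram_limit.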
00:16:04.003 [2024-11-19 23:28:50.109350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.109553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:04.003 [2024-11-19 23:28:50.109590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.004 ms 00:16:04.003 [2024-11-19 23:28:50.109601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.114619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.114674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:04.003 [2024-11-19 23:28:50.114685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.952 ms 00:16:04.003 [2024-11-19 23:28:50.114695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.120109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.120306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:04.003 [2024-11-19 23:28:50.120326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.339 ms 00:16:04.003 [2024-11-19 23:28:50.120341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.120441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.120453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:04.003 [2024-11-19 23:28:50.120463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:04.003 [2024-11-19 23:28:50.120473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.120593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:04.003 [2024-11-19 23:28:50.120606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:04.003 [2024-11-19 23:28:50.120615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:04.003 [2024-11-19 23:28:50.120625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:04.003 [2024-11-19 23:28:50.121903] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:04.003 [2024-11-19 23:28:50.123291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2845.357 ms, result 0 00:16:04.003 [2024-11-19 23:28:50.124345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:04.003 { 00:16:04.003 "name": "ftl0", 00:16:04.003 "uuid": "0f3b695f-3979-428e-be6f-ba7a91aa0ca7" 00:16:04.003 } 00:16:04.003 23:28:50 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:04.003 23:28:50 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:04.003 23:28:50 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.003 23:28:50 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:16:04.003 23:28:50 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.003 23:28:50 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.003 23:28:50 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:04.265 23:28:50 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:04.528 [ 00:16:04.528 { 00:16:04.528 "name": "ftl0", 00:16:04.528 "aliases": [ 00:16:04.528 "0f3b695f-3979-428e-be6f-ba7a91aa0ca7" 00:16:04.528 ], 00:16:04.528 "product_name": "FTL disk", 00:16:04.528 "block_size": 4096, 00:16:04.528 "num_blocks": 23592960, 00:16:04.528 "uuid": "0f3b695f-3979-428e-be6f-ba7a91aa0ca7", 00:16:04.528 "assigned_rate_limits": { 00:16:04.528 "rw_ios_per_sec": 0, 00:16:04.528 "rw_mbytes_per_sec": 0, 00:16:04.528 "r_mbytes_per_sec": 0, 00:16:04.528 "w_mbytes_per_sec": 0 00:16:04.528 }, 00:16:04.528 "claimed": false, 00:16:04.528 "zoned": false, 00:16:04.528 "supported_io_types": { 00:16:04.528 "read": true, 00:16:04.528 "write": true, 00:16:04.528 "unmap": true, 00:16:04.528 "flush": true, 00:16:04.528 "reset": false, 00:16:04.528 "nvme_admin": false, 00:16:04.528 "nvme_io": false, 00:16:04.528 "nvme_io_md": false, 00:16:04.528 "write_zeroes": true, 00:16:04.528 "zcopy": false, 00:16:04.528 "get_zone_info": false, 00:16:04.528 "zone_management": false, 00:16:04.528 "zone_append": false, 00:16:04.528 "compare": false, 00:16:04.528 "compare_and_write": false, 00:16:04.528 "abort": false, 00:16:04.528 "seek_hole": false, 00:16:04.528 "seek_data": false, 00:16:04.528 "copy": false, 00:16:04.528 "nvme_iov_md": false 00:16:04.528 }, 00:16:04.528 "driver_specific": { 00:16:04.528 "ftl": { 00:16:04.528 "base_bdev": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 00:16:04.528 "cache": "nvc0n1p0" 00:16:04.528 } 00:16:04.528 } 00:16:04.528 } 00:16:04.528 ] 00:16:04.528 23:28:50 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:16:04.528 23:28:50 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:04.528 23:28:50 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:04.790 23:28:50 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:04.790 23:28:50 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:05.061 23:28:51 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:05.061 { 00:16:05.061 "name": "ftl0", 00:16:05.061 "aliases": [ 00:16:05.061 "0f3b695f-3979-428e-be6f-ba7a91aa0ca7" 00:16:05.061 ], 00:16:05.061 "product_name": "FTL disk", 00:16:05.061 "block_size": 4096, 00:16:05.061 "num_blocks": 23592960, 00:16:05.061 "uuid": "0f3b695f-3979-428e-be6f-ba7a91aa0ca7", 00:16:05.061 "assigned_rate_limits": { 00:16:05.062 "rw_ios_per_sec": 0, 00:16:05.062 "rw_mbytes_per_sec": 0, 00:16:05.062 "r_mbytes_per_sec": 0, 00:16:05.062 "w_mbytes_per_sec": 0 00:16:05.062 }, 00:16:05.062 "claimed": false, 00:16:05.062 "zoned": false, 00:16:05.062 "supported_io_types": { 00:16:05.062 "read": true, 00:16:05.062 "write": true, 00:16:05.062 "unmap": true, 00:16:05.062 "flush": true, 00:16:05.062 "reset": false, 00:16:05.062 "nvme_admin": false, 00:16:05.062 "nvme_io": false, 00:16:05.062 "nvme_io_md": false, 00:16:05.062 "write_zeroes": true, 00:16:05.062 "zcopy": false, 00:16:05.062 "get_zone_info": false, 00:16:05.062 "zone_management": false, 00:16:05.062 "zone_append": false, 00:16:05.062 "compare": false, 00:16:05.062 "compare_and_write": false, 00:16:05.062 "abort": false, 00:16:05.062 "seek_hole": false, 00:16:05.062 "seek_data": false, 00:16:05.062 "copy": false, 00:16:05.062 "nvme_iov_md": false 00:16:05.062 }, 00:16:05.062 "driver_specific": { 00:16:05.062 "ftl": { 00:16:05.062 "base_bdev": "0700003b-0fb5-463c-a1bc-22546d8df7dd", 
00:16:05.062 "cache": "nvc0n1p0" 00:16:05.062 } 00:16:05.062 } 00:16:05.062 } 00:16:05.062 ]' 00:16:05.062 23:28:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:05.062 23:28:51 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:05.062 23:28:51 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:05.325 [2024-11-19 23:28:51.253963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.254022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:05.325 [2024-11-19 23:28:51.254041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:05.325 [2024-11-19 23:28:51.254063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.254113] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:05.325 [2024-11-19 23:28:51.254948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.254986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:05.325 [2024-11-19 23:28:51.254999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:16:05.325 [2024-11-19 23:28:51.255012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.256101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.256296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:05.325 [2024-11-19 23:28:51.256313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:16:05.325 [2024-11-19 23:28:51.256324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.260205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.260320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:05.325 [2024-11-19 23:28:51.260386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.803 ms 00:16:05.325 [2024-11-19 23:28:51.260417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.267580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.267752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:05.325 [2024-11-19 23:28:51.267825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.058 ms 00:16:05.325 [2024-11-19 23:28:51.267856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.270787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.270953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:05.325 [2024-11-19 23:28:51.271023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:16:05.325 [2024-11-19 23:28:51.271049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.277263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.277440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:05.325 [2024-11-19 23:28:51.277919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 6.124 ms 00:16:05.325 [2024-11-19 23:28:51.277979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.278385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.278469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:05.325 [2024-11-19 23:28:51.278483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:16:05.325 [2024-11-19 23:28:51.278498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.325 [2024-11-19 23:28:51.282193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.325 [2024-11-19 23:28:51.282254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:05.325 [2024-11-19 23:28:51.282264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.649 ms 00:16:05.326 [2024-11-19 23:28:51.282277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.326 [2024-11-19 23:28:51.285088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.326 [2024-11-19 23:28:51.285259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:05.326 [2024-11-19 23:28:51.285276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:16:05.326 [2024-11-19 23:28:51.285285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.326 [2024-11-19 23:28:51.287596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.326 [2024-11-19 23:28:51.287650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:05.326 [2024-11-19 23:28:51.287660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:16:05.326 [2024-11-19 23:28:51.287669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.326 [2024-11-19 23:28:51.290082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.326 [2024-11-19 23:28:51.290133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:05.326 [2024-11-19 23:28:51.290142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.220 ms 00:16:05.326 [2024-11-19 23:28:51.290151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.326 [2024-11-19 23:28:51.290217] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:05.326 [2024-11-19 23:28:51.290235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290307] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 
23:28:51.290556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:16:05.326 [2024-11-19 23:28:51.290803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:05.326 [2024-11-19 23:28:51.290888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.290992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:05.327 [2024-11-19 23:28:51.291196] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:05.327 [2024-11-19 23:28:51.291204] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0f3b695f-3979-428e-be6f-ba7a91aa0ca7 00:16:05.327 [2024-11-19 23:28:51.291215] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:05.327 [2024-11-19 23:28:51.291223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:05.327 [2024-11-19 23:28:51.291232] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:05.327 [2024-11-19 23:28:51.291243] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:05.327 [2024-11-19 23:28:51.291252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:05.327 [2024-11-19 23:28:51.291261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:05.327 
[2024-11-19 23:28:51.291270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:05.327 [2024-11-19 23:28:51.291276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:05.327 [2024-11-19 23:28:51.291286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:05.327 [2024-11-19 23:28:51.291294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.327 [2024-11-19 23:28:51.291304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:05.327 [2024-11-19 23:28:51.291313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:16:05.327 [2024-11-19 23:28:51.291342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.294317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.327 [2024-11-19 23:28:51.294453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:05.327 [2024-11-19 23:28:51.294506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:16:05.327 [2024-11-19 23:28:51.294551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.294750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:05.327 [2024-11-19 23:28:51.294809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:05.327 [2024-11-19 23:28:51.294831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:16:05.327 [2024-11-19 23:28:51.294853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.304007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.304172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:05.327 [2024-11-19 23:28:51.304234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.304262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.304399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.304429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:05.327 [2024-11-19 23:28:51.304449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.304474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.304579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.304673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:05.327 [2024-11-19 23:28:51.304698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.304720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.304811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.304837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:05.327 [2024-11-19 23:28:51.304857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.304879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.321715] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.321930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:05.327 [2024-11-19 23:28:51.321987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.322014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.335521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.335714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:05.327 [2024-11-19 23:28:51.335899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.335941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.336069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.336099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:05.327 [2024-11-19 23:28:51.336124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.336146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.336256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.336300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:05.327 [2024-11-19 23:28:51.336321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.336412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.336560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.327 [2024-11-19 23:28:51.336596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:05.327 [2024-11-19 23:28:51.336619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.327 [2024-11-19 23:28:51.336658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.327 [2024-11-19 23:28:51.336758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.328 [2024-11-19 23:28:51.336857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:05.328 [2024-11-19 23:28:51.336882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.328 [2024-11-19 23:28:51.336906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.328 [2024-11-19 23:28:51.336992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.328 [2024-11-19 23:28:51.337017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:05.328 [2024-11-19 23:28:51.337037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.328 [2024-11-19 23:28:51.337115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.328 [2024-11-19 23:28:51.337214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:05.328 [2024-11-19 23:28:51.337242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:05.328 [2024-11-19 23:28:51.337264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:05.328 [2024-11-19 23:28:51.337329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:05.328 
[2024-11-19 23:28:51.337611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.632 ms, result 0 00:16:05.328 true 00:16:05.328 23:28:51 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85062 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85062 ']' 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85062 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85062 00:16:05.328 killing process with pid 85062 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85062' 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85062 00:16:05.328 23:28:51 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85062 00:16:10.612 23:28:56 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:11.181 65536+0 records in 00:16:11.181 65536+0 records out 00:16:11.181 268435456 bytes (268 MB, 256 MiB) copied, 0.820905 s, 327 MB/s 00:16:11.181 23:28:57 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:11.181 [2024-11-19 23:28:57.301345] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
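Note: the killprocess trace above follows a common bash teardown pattern: probe the pid with kill -0, confirm the command name with ps before signalling, then wait to reap the process. A minimal sketch of that pattern (function and file names here are illustrative, not the actual autotest_common.sh implementation):

kill_if_alive() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0    # -0 probes liveness without sending a signal
    local name
    name=$(ps --no-headers -o comm= "$pid")   # confirm it is our reactor, not e.g. sudo
    [ "$name" = sudo ] && return 1
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap so the pid cannot be recycled
}

dd if=/dev/urandom of=random_pattern bs=4K count=65536   # 65536 * 4096 B = 268435456 B = 256 MiB

The 256 MiB figure in the dd output above is exactly that 65536 x 4 KiB product, and the "WAF: inf" in the statistics dump earlier follows from 960 total writes against 0 user writes (960 / 0).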
00:16:11.181 [2024-11-19 23:28:57.301635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85222 ] 00:16:11.441 [2024-11-19 23:28:57.459432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.441 [2024-11-19 23:28:57.478608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.441 [2024-11-19 23:28:57.570140] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:11.441 [2024-11-19 23:28:57.570208] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:11.703 [2024-11-19 23:28:57.729294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.729520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:11.703 [2024-11-19 23:28:57.729546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:11.703 [2024-11-19 23:28:57.729555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.732230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.732277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.703 [2024-11-19 23:28:57.732288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.648 ms 00:16:11.703 [2024-11-19 23:28:57.732296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.732401] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:11.703 [2024-11-19 23:28:57.732658] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:11.703 [2024-11-19 23:28:57.732674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.732684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.703 [2024-11-19 23:28:57.732698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:16:11.703 [2024-11-19 23:28:57.732706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.735163] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:11.703 [2024-11-19 23:28:57.739049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.739221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:11.703 [2024-11-19 23:28:57.739301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.889 ms 00:16:11.703 [2024-11-19 23:28:57.739320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.739390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.739402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:11.703 [2024-11-19 23:28:57.739411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:11.703 [2024-11-19 23:28:57.739423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.747383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
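For context, the --json=.../ftl/config/ftl.json argument above hands spdk_dd a bdev configuration to instantiate ftl0 before the copy. A hedged sketch of the shape such a file can take, using SPDK's bdev_ftl_create method with the write-buffer cache bdev (nvc0n1p0) named in the startup trace that follows; the base_bdev value here is illustrative, not taken from this run:

cat > ftl.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_ftl_create",
          "params": {
            "name": "ftl0",
            "base_bdev": "nvme0n1",
            "cache": "nvc0n1p0"
          }
        }
      ]
    }
  ]
}
EOF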
00:16:11.703 [2024-11-19 23:28:57.747426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.703 [2024-11-19 23:28:57.747436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.912 ms 00:16:11.703 [2024-11-19 23:28:57.747443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.747585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.747597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.703 [2024-11-19 23:28:57.747606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:11.703 [2024-11-19 23:28:57.747614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.747646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.747655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:11.703 [2024-11-19 23:28:57.747663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:11.703 [2024-11-19 23:28:57.747671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.747693] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:11.703 [2024-11-19 23:28:57.749705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.749764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.703 [2024-11-19 23:28:57.749775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:16:11.703 [2024-11-19 23:28:57.749783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.749832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.703 [2024-11-19 23:28:57.749841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:11.703 [2024-11-19 23:28:57.749855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:11.703 [2024-11-19 23:28:57.749862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.703 [2024-11-19 23:28:57.749881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:11.703 [2024-11-19 23:28:57.749903] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:11.704 [2024-11-19 23:28:57.749940] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:11.704 [2024-11-19 23:28:57.749960] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:11.704 [2024-11-19 23:28:57.750063] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:11.704 [2024-11-19 23:28:57.750075] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:11.704 [2024-11-19 23:28:57.750086] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:11.704 [2024-11-19 23:28:57.750096] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750105] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:11.704 [2024-11-19 23:28:57.750120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:11.704 [2024-11-19 23:28:57.750128] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:11.704 [2024-11-19 23:28:57.750137] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:11.704 [2024-11-19 23:28:57.750151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.704 [2024-11-19 23:28:57.750159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:11.704 [2024-11-19 23:28:57.750166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:11.704 [2024-11-19 23:28:57.750176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.704 [2024-11-19 23:28:57.750264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.704 [2024-11-19 23:28:57.750273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:11.704 [2024-11-19 23:28:57.750281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:11.704 [2024-11-19 23:28:57.750288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.704 [2024-11-19 23:28:57.750385] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:11.704 [2024-11-19 23:28:57.750397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:11.704 [2024-11-19 23:28:57.750412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:11.704 [2024-11-19 23:28:57.750444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:11.704 [2024-11-19 23:28:57.750472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.704 [2024-11-19 23:28:57.750487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:11.704 [2024-11-19 23:28:57.750495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:11.704 [2024-11-19 23:28:57.750502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.704 [2024-11-19 23:28:57.750511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:11.704 [2024-11-19 23:28:57.750521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:11.704 [2024-11-19 23:28:57.750529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:11.704 [2024-11-19 23:28:57.750545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750553] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:11.704 [2024-11-19 23:28:57.750569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:11.704 [2024-11-19 23:28:57.750599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:11.704 [2024-11-19 23:28:57.750622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:11.704 [2024-11-19 23:28:57.750645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:11.704 [2024-11-19 23:28:57.750669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:11.704 [2024-11-19 23:28:57.750684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:11.704 [2024-11-19 23:28:57.750692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:11.704 [2024-11-19 23:28:57.750699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:11.704 [2024-11-19 23:28:57.750707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:11.704 [2024-11-19 23:28:57.750714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:11.704 [2024-11-19 23:28:57.750723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:11.704 [2024-11-19 23:28:57.750757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:11.704 [2024-11-19 23:28:57.750764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750772] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:11.704 [2024-11-19 23:28:57.750781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:11.704 [2024-11-19 23:28:57.750789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.704 [2024-11-19 23:28:57.750812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:11.704 [2024-11-19 23:28:57.750821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:11.704 [2024-11-19 23:28:57.750829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:11.704 
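The layout numbers above are internally consistent at a 4 KiB FTL block size: the 90.00 MiB l2p region is exactly the 23592960 L2P entries at the reported 4-byte address size, and the blk_sz values in the superblock dump just below encode the same regions in blocks. A quick cross-check (the 4 KiB block size is an assumption the offsets are consistent with):

echo $(( 23592960 * 4 ))              # 94371840 B of L2P table
echo $(( 23592960 * 4 / 1048576 ))    # = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"
echo $(( 0x5a00 * 4096 / 1048576 ))   # blk_sz 0x5a00 at 4 KiB blocks = 90 MiB again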
[2024-11-19 23:28:57.750838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:11.704 [2024-11-19 23:28:57.750846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:11.704 [2024-11-19 23:28:57.750854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:11.704 [2024-11-19 23:28:57.750862] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:11.704 [2024-11-19 23:28:57.750871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.704 [2024-11-19 23:28:57.750881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:11.704 [2024-11-19 23:28:57.750889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:11.704 [2024-11-19 23:28:57.750896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:11.704 [2024-11-19 23:28:57.750904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:11.704 [2024-11-19 23:28:57.750911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:11.704 [2024-11-19 23:28:57.750918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:11.704 [2024-11-19 23:28:57.750925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:11.704 [2024-11-19 23:28:57.750932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:11.704 [2024-11-19 23:28:57.750939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:11.704 [2024-11-19 23:28:57.750946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:11.704 [2024-11-19 23:28:57.750953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:11.704 [2024-11-19 23:28:57.750960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:11.704 [2024-11-19 23:28:57.750967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:11.704 [2024-11-19 23:28:57.750974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:11.704 [2024-11-19 23:28:57.750981] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:11.704 [2024-11-19 23:28:57.750990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.704 [2024-11-19 23:28:57.751002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:11.704 [2024-11-19 23:28:57.751010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:11.704 [2024-11-19 23:28:57.751018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:11.704 [2024-11-19 23:28:57.751024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:11.704 [2024-11-19 23:28:57.751032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.704 [2024-11-19 23:28:57.751039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:11.704 [2024-11-19 23:28:57.751047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:16:11.705 [2024-11-19 23:28:57.751056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.764907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.764953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:11.705 [2024-11-19 23:28:57.764966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.800 ms 00:16:11.705 [2024-11-19 23:28:57.764975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.765109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.765126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:11.705 [2024-11-19 23:28:57.765134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:11.705 [2024-11-19 23:28:57.765142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.784632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.784689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:11.705 [2024-11-19 23:28:57.784702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.465 ms 00:16:11.705 [2024-11-19 23:28:57.784711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.784827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.784843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:11.705 [2024-11-19 23:28:57.784852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:11.705 [2024-11-19 23:28:57.784860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.785376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.785418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:11.705 [2024-11-19 23:28:57.785430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:16:11.705 [2024-11-19 23:28:57.785445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.785597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.785611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:11.705 [2024-11-19 23:28:57.785624] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:16:11.705 [2024-11-19 23:28:57.785633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.793798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.793840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:11.705 [2024-11-19 23:28:57.793856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.141 ms 00:16:11.705 [2024-11-19 23:28:57.793865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.797614] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:11.705 [2024-11-19 23:28:57.797672] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:11.705 [2024-11-19 23:28:57.797684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.797693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:11.705 [2024-11-19 23:28:57.797702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:16:11.705 [2024-11-19 23:28:57.797709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.814039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.814083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:11.705 [2024-11-19 23:28:57.814104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.218 ms 00:16:11.705 [2024-11-19 23:28:57.814116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.817135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.817181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:11.705 [2024-11-19 23:28:57.817191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:16:11.705 [2024-11-19 23:28:57.817198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.819767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.819807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:11.705 [2024-11-19 23:28:57.819816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:16:11.705 [2024-11-19 23:28:57.819823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.820183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.820195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:11.705 [2024-11-19 23:28:57.820208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:16:11.705 [2024-11-19 23:28:57.820217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.844840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.844894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:11.705 [2024-11-19 23:28:57.844907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.601 ms 00:16:11.705 [2024-11-19 23:28:57.844916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.853014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:11.705 [2024-11-19 23:28:57.871807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.871865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:11.705 [2024-11-19 23:28:57.871882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.806 ms 00:16:11.705 [2024-11-19 23:28:57.871891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.871979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.872014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:11.705 [2024-11-19 23:28:57.872025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:11.705 [2024-11-19 23:28:57.872033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.872093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.872103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:11.705 [2024-11-19 23:28:57.872112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:11.705 [2024-11-19 23:28:57.872120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.872143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.872152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:11.705 [2024-11-19 23:28:57.872162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:11.705 [2024-11-19 23:28:57.872169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.872207] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:11.705 [2024-11-19 23:28:57.872220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.872228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:11.705 [2024-11-19 23:28:57.872236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:11.705 [2024-11-19 23:28:57.872245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.878031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.878082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:11.705 [2024-11-19 23:28:57.878103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.764 ms 00:16:11.705 [2024-11-19 23:28:57.878110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 [2024-11-19 23:28:57.878203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.705 [2024-11-19 23:28:57.878217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:11.705 [2024-11-19 23:28:57.878226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:11.705 [2024-11-19 23:28:57.878234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.705 
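Every management step in this trace is reported as an Action (or, on teardown, a Rollback) with a name, a duration, and a status, after which finish_msg reports the whole pipeline's duration, e.g. 'FTL startup', duration = 149.654 ms just below. A toy illustration of that timing discipline in shell (illustrative only; the real driver is mngt/ftl_mngt.c, not a script):

run_step() {
    local name=$1; shift
    local t0=$(date +%s%N)               # nanoseconds before the step
    "$@"
    local rc=$?
    local t1=$(date +%s%N)
    printf '[FTL][ftl0] name: %s\n' "$name"
    printf '[FTL][ftl0] duration: %d.%03d ms\n' \
        $(( (t1 - t0) / 1000000 )) $(( (t1 - t0) / 1000 % 1000 ))
    printf '[FTL][ftl0] status: %d\n' "$rc"
    return "$rc"
}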
[2024-11-19 23:28:57.879263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:11.705 [2024-11-19 23:28:57.880588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.654 ms, result 0 00:16:11.705 [2024-11-19 23:28:57.881818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:11.966 [2024-11-19 23:28:57.889275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:12.908  [2024-11-19T23:29:00.038Z] Copying: 13/256 [MB] (13 MBps) [2024-11-19T23:29:00.979Z] Copying: 38/256 [MB] (24 MBps) [2024-11-19T23:29:01.923Z] Copying: 60/256 [MB] (22 MBps) [2024-11-19T23:29:03.312Z] Copying: 81/256 [MB] (20 MBps) [2024-11-19T23:29:04.257Z] Copying: 97/256 [MB] (16 MBps) [2024-11-19T23:29:05.200Z] Copying: 112/256 [MB] (14 MBps) [2024-11-19T23:29:06.228Z] Copying: 145/256 [MB] (32 MBps) [2024-11-19T23:29:07.170Z] Copying: 173/256 [MB] (28 MBps) [2024-11-19T23:29:08.111Z] Copying: 191/256 [MB] (17 MBps) [2024-11-19T23:29:09.054Z] Copying: 225/256 [MB] (33 MBps) [2024-11-19T23:29:09.054Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-19 23:29:08.788348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:22.862 [2024-11-19 23:29:08.789382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.862 [2024-11-19 23:29:08.789476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:22.863 [2024-11-19 23:29:08.789533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:22.863 [2024-11-19 23:29:08.789552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.789645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:22.863 [2024-11-19 23:29:08.790051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.790138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:22.863 [2024-11-19 23:29:08.790184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:16:22.863 [2024-11-19 23:29:08.790202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.791713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.791820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:22.863 [2024-11-19 23:29:08.791873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:16:22.863 [2024-11-19 23:29:08.791894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.797471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.797565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:22.863 [2024-11-19 23:29:08.797611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.548 ms 00:16:22.863 [2024-11-19 23:29:08.797628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.803039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.803124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:16:22.863 [2024-11-19 23:29:08.803165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.377 ms 00:16:22.863 [2024-11-19 23:29:08.803182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.804659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.804758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:22.863 [2024-11-19 23:29:08.804799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:16:22.863 [2024-11-19 23:29:08.804815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.808530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.808617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:22.863 [2024-11-19 23:29:08.808661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:16:22.863 [2024-11-19 23:29:08.808678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.808783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.808803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:22.863 [2024-11-19 23:29:08.808819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:22.863 [2024-11-19 23:29:08.808860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.810769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.810849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:22.863 [2024-11-19 23:29:08.810886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:16:22.863 [2024-11-19 23:29:08.810902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.812319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.812400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:22.863 [2024-11-19 23:29:08.812442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:16:22.863 [2024-11-19 23:29:08.812458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.813433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.813515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:22.863 [2024-11-19 23:29:08.813554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:16:22.863 [2024-11-19 23:29:08.813570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.814640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.863 [2024-11-19 23:29:08.814720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:22.863 [2024-11-19 23:29:08.814773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:16:22.863 [2024-11-19 23:29:08.814789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.863 [2024-11-19 23:29:08.814819] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:16:22.863 [2024-11-19 23:29:08.814912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.814944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.814966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:16:22.863 [2024-11-19 23:29:08.815325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:22.863 [2024-11-19 23:29:08.815420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815746] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:22.864 [2024-11-19 23:29:08.815757] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:22.864 [2024-11-19 23:29:08.815766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0f3b695f-3979-428e-be6f-ba7a91aa0ca7 00:16:22.864 [2024-11-19 23:29:08.815772] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:22.864 [2024-11-19 23:29:08.815777] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:22.864 [2024-11-19 23:29:08.815782] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:22.864 [2024-11-19 23:29:08.815789] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:22.864 [2024-11-19 23:29:08.815795] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:22.864 [2024-11-19 23:29:08.815801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:22.864 [2024-11-19 23:29:08.815807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:22.864 [2024-11-19 23:29:08.815811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:22.864 [2024-11-19 23:29:08.815816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:22.864 [2024-11-19 23:29:08.815822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.865 [2024-11-19 23:29:08.815831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:22.865 [2024-11-19 23:29:08.815837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:16:22.865 [2024-11-19 23:29:08.815843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.817079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.865 [2024-11-19 23:29:08.817092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:22.865 [2024-11-19 23:29:08.817099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:16:22.865 [2024-11-19 23:29:08.817104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.817174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:22.865 [2024-11-19 23:29:08.817180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:22.865 [2024-11-19 23:29:08.817186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:22.865 [2024-11-19 23:29:08.817191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.821634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.821661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:22.865 [2024-11-19 23:29:08.821669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.821674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.821720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.821726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:22.865 [2024-11-19 23:29:08.821745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.821751] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.821780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.821790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:22.865 [2024-11-19 23:29:08.821795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.821801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.821814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.821821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:22.865 [2024-11-19 23:29:08.821826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.821832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.829419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.829571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:22.865 [2024-11-19 23:29:08.829581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.829587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.835859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.835983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:22.865 [2024-11-19 23:29:08.836015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.836024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.836046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.836056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:22.865 [2024-11-19 23:29:08.836062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.836068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.836090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.836096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:22.865 [2024-11-19 23:29:08.836104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.836110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.836165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.836173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:22.865 [2024-11-19 23:29:08.836179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:22.865 [2024-11-19 23:29:08.836185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:22.865 [2024-11-19 23:29:08.836211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:22.865 [2024-11-19 23:29:08.836218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:22.865 [2024-11-19 23:29:08.836224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:16:22.865 [2024-11-19 23:29:08.836231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:22.865 [2024-11-19 23:29:08.836261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:22.865 [2024-11-19 23:29:08.836267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:22.865 [2024-11-19 23:29:08.836273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:22.865 [2024-11-19 23:29:08.836279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:22.865 [2024-11-19 23:29:08.836311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:22.865 [2024-11-19 23:29:08.836318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:22.865 [2024-11-19 23:29:08.836326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:22.865 [2024-11-19 23:29:08.836332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:22.865 [2024-11-19 23:29:08.836434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.033 ms, result 0
00:16:23.132
00:16:23.132
00:16:23.132 23:29:09 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85353
00:16:23.132 23:29:09 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85353
00:16:23.132 23:29:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85353 ']'
00:16:23.133 23:29:09 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:16:23.133 23:29:09 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:23.133 23:29:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:23.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:23.133 23:29:09 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:23.133 23:29:09 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:23.133 23:29:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:16:23.400 [2024-11-19 23:29:09.382780] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization...
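An aside for readers replaying this section by hand: the xtrace above, together with the trim steps recorded further down, reduces to the minimal sketch below. It is illustrative only: it assumes rpc.py's default socket /var/tmp/spdk.sock, uses rpc_get_methods as a hypothetical stand-in for the harness's waitforlisten helper, and derives the tail LBA from the L2P size dumped later in this run (23592960 entries, so 23592960 - 1024 = 23591936); $config_json is a hypothetical path, not something captured in this log.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &   # same binary and -L flag as trim.sh@71 above
  svcpid=$!
  # poll until the target answers on the RPC socket (stand-in for waitforlisten)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # recreate the bdevs and the ftl0 device from saved JSON, as trim.sh@75 does below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < "$config_json"
  # trim the first and last 1024-block ranges of the L2P, as trim.sh@78/@79 do below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba $((23592960 - 1024)) --num_blocks 1024
  kill "$svcpid" && wait "$svcpid"   # triggers the clean 'FTL shutdown' sequence logged at the end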
00:16:23.400 [2024-11-19 23:29:09.383261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85353 ]
00:16:23.400 [2024-11-19 23:29:09.537510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:23.400 [2024-11-19 23:29:09.563122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:24.343 23:29:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:24.343 23:29:10 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:16:24.343 23:29:10 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:16:24.343 [2024-11-19 23:29:10.436494] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:24.343 [2024-11-19 23:29:10.436546] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:16:24.606 [2024-11-19 23:29:10.599756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.606 [2024-11-19 23:29:10.599791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:16:24.606 [2024-11-19 23:29:10.599801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms
00:16:24.606 [2024-11-19 23:29:10.599809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.606 [2024-11-19 23:29:10.601544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.606 [2024-11-19 23:29:10.601578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:24.606 [2024-11-19 23:29:10.601586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms
00:16:24.606 [2024-11-19 23:29:10.601593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.606 [2024-11-19 23:29:10.601653] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:16:24.606 [2024-11-19 23:29:10.601845] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:16:24.606 [2024-11-19 23:29:10.601856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.606 [2024-11-19 23:29:10.601867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:24.606 [2024-11-19 23:29:10.601874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms
00:16:24.606 [2024-11-19 23:29:10.601881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.606 [2024-11-19 23:29:10.603002] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:16:24.606 [2024-11-19 23:29:10.605039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.606 [2024-11-19 23:29:10.605068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:16:24.606 [2024-11-19 23:29:10.605078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms
00:16:24.606 [2024-11-19 23:29:10.605083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.606 [2024-11-19 23:29:10.605130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.606 [2024-11-19 23:29:10.605137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:16:24.606 [2024-11-19 23:29:10.605146]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:24.607 [2024-11-19 23:29:10.605152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.609576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.607 [2024-11-19 23:29:10.609601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:24.607 [2024-11-19 23:29:10.609612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.386 ms 00:16:24.607 [2024-11-19 23:29:10.609618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.609693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.607 [2024-11-19 23:29:10.609701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:24.607 [2024-11-19 23:29:10.609709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:24.607 [2024-11-19 23:29:10.609717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.609751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.607 [2024-11-19 23:29:10.609758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:24.607 [2024-11-19 23:29:10.609767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:24.607 [2024-11-19 23:29:10.609775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.609794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:24.607 [2024-11-19 23:29:10.610941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.607 [2024-11-19 23:29:10.610967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:24.607 [2024-11-19 23:29:10.610975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:16:24.607 [2024-11-19 23:29:10.610983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.611008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.607 [2024-11-19 23:29:10.611016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:24.607 [2024-11-19 23:29:10.611022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:24.607 [2024-11-19 23:29:10.611029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.611047] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:24.607 [2024-11-19 23:29:10.611062] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:24.607 [2024-11-19 23:29:10.611091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:24.607 [2024-11-19 23:29:10.611107] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:24.607 [2024-11-19 23:29:10.611186] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:24.607 [2024-11-19 23:29:10.611196] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:24.607 [2024-11-19 23:29:10.611203] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:24.607 [2024-11-19 23:29:10.611212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611219] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611230] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:24.607 [2024-11-19 23:29:10.611235] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:24.607 [2024-11-19 23:29:10.611242] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:24.607 [2024-11-19 23:29:10.611248] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:24.607 [2024-11-19 23:29:10.611256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.607 [2024-11-19 23:29:10.611262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:24.607 [2024-11-19 23:29:10.611269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:16:24.607 [2024-11-19 23:29:10.611275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.611342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.607 [2024-11-19 23:29:10.611351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:24.607 [2024-11-19 23:29:10.611358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:24.607 [2024-11-19 23:29:10.611363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.607 [2024-11-19 23:29:10.611439] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:24.607 [2024-11-19 23:29:10.611448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:24.607 [2024-11-19 23:29:10.611460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:24.607 [2024-11-19 23:29:10.611482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:24.607 [2024-11-19 23:29:10.611504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.607 [2024-11-19 23:29:10.611515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:24.607 [2024-11-19 23:29:10.611520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:24.607 [2024-11-19 23:29:10.611526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:24.607 [2024-11-19 23:29:10.611531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:24.607 [2024-11-19 23:29:10.611537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:24.607 [2024-11-19 23:29:10.611542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.607 
[2024-11-19 23:29:10.611549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:24.607 [2024-11-19 23:29:10.611554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:24.607 [2024-11-19 23:29:10.611572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:24.607 [2024-11-19 23:29:10.611588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:24.607 [2024-11-19 23:29:10.611607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:24.607 [2024-11-19 23:29:10.611625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:24.607 [2024-11-19 23:29:10.611647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.607 [2024-11-19 23:29:10.611659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:24.607 [2024-11-19 23:29:10.611665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:24.607 [2024-11-19 23:29:10.611673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:24.607 [2024-11-19 23:29:10.611679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:24.607 [2024-11-19 23:29:10.611686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:24.607 [2024-11-19 23:29:10.611691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:24.607 [2024-11-19 23:29:10.611704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:24.607 [2024-11-19 23:29:10.611711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611716] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:24.607 [2024-11-19 23:29:10.611725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:24.607 [2024-11-19 23:29:10.611746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:24.607 [2024-11-19 23:29:10.611761] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:24.607 [2024-11-19 23:29:10.611768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:24.607 [2024-11-19 23:29:10.611774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:24.607 [2024-11-19 23:29:10.611782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:24.607 [2024-11-19 23:29:10.611787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:24.607 [2024-11-19 23:29:10.611796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:24.607 [2024-11-19 23:29:10.611802] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:24.607 [2024-11-19 23:29:10.611817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.607 [2024-11-19 23:29:10.611824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:24.607 [2024-11-19 23:29:10.611832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:24.608 [2024-11-19 23:29:10.611838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:24.608 [2024-11-19 23:29:10.611846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:24.608 [2024-11-19 23:29:10.611852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:24.608 [2024-11-19 23:29:10.611860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:24.608 [2024-11-19 23:29:10.611866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:24.608 [2024-11-19 23:29:10.611873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:24.608 [2024-11-19 23:29:10.611879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:24.608 [2024-11-19 23:29:10.611886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:24.608 [2024-11-19 23:29:10.611892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:24.608 [2024-11-19 23:29:10.611900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:24.608 [2024-11-19 23:29:10.611906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:24.608 [2024-11-19 23:29:10.611915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:24.608 [2024-11-19 23:29:10.611921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:24.608 [2024-11-19 
23:29:10.611932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:24.608 [2024-11-19 23:29:10.611939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:24.608 [2024-11-19 23:29:10.611947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:24.608 [2024-11-19 23:29:10.611953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:24.608 [2024-11-19 23:29:10.611960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:24.608 [2024-11-19 23:29:10.611967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.611975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:24.608 [2024-11-19 23:29:10.611982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:16:24.608 [2024-11-19 23:29:10.611999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.620119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.620146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:24.608 [2024-11-19 23:29:10.620155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.078 ms 00:16:24.608 [2024-11-19 23:29:10.620162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.620253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.620263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:24.608 [2024-11-19 23:29:10.620270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:24.608 [2024-11-19 23:29:10.620276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.627817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.627844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:24.608 [2024-11-19 23:29:10.627851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.526 ms 00:16:24.608 [2024-11-19 23:29:10.627859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.627895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.627903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:24.608 [2024-11-19 23:29:10.627909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:24.608 [2024-11-19 23:29:10.627916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.628209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.628228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:24.608 [2024-11-19 23:29:10.628236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:24.608 [2024-11-19 23:29:10.628243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:24.608 [2024-11-19 23:29:10.628344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.628359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:24.608 [2024-11-19 23:29:10.628365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:24.608 [2024-11-19 23:29:10.628374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.633126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.633157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:24.608 [2024-11-19 23:29:10.633164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:16:24.608 [2024-11-19 23:29:10.633171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.635224] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:24.608 [2024-11-19 23:29:10.635252] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:24.608 [2024-11-19 23:29:10.635261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.635268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:24.608 [2024-11-19 23:29:10.635275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:16:24.608 [2024-11-19 23:29:10.635282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.650022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.650137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:24.608 [2024-11-19 23:29:10.650150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.688 ms 00:16:24.608 [2024-11-19 23:29:10.650159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.651565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.651595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:24.608 [2024-11-19 23:29:10.651603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:16:24.608 [2024-11-19 23:29:10.651609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.652872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.652902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:24.608 [2024-11-19 23:29:10.652908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:16:24.608 [2024-11-19 23:29:10.652915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.653157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.653173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:24.608 [2024-11-19 23:29:10.653180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:24.608 [2024-11-19 23:29:10.653187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.684530] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.684579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:24.608 [2024-11-19 23:29:10.684592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.326 ms 00:16:24.608 [2024-11-19 23:29:10.684605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.691153] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:24.608 [2024-11-19 23:29:10.702800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.702828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:24.608 [2024-11-19 23:29:10.702840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.122 ms 00:16:24.608 [2024-11-19 23:29:10.702846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.702928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.702937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:24.608 [2024-11-19 23:29:10.702946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:24.608 [2024-11-19 23:29:10.702952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.702991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.702999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:24.608 [2024-11-19 23:29:10.703007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:24.608 [2024-11-19 23:29:10.703012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.703031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.703037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:24.608 [2024-11-19 23:29:10.703047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:24.608 [2024-11-19 23:29:10.703054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.608 [2024-11-19 23:29:10.703077] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:24.608 [2024-11-19 23:29:10.703084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.608 [2024-11-19 23:29:10.703091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:24.608 [2024-11-19 23:29:10.703097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:24.608 [2024-11-19 23:29:10.703104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.609 [2024-11-19 23:29:10.706345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.609 [2024-11-19 23:29:10.706453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:24.609 [2024-11-19 23:29:10.706465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:16:24.609 [2024-11-19 23:29:10.706475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:24.609 [2024-11-19 23:29:10.706530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:24.609 [2024-11-19 23:29:10.706539] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:24.609 [2024-11-19 23:29:10.706545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:16:24.609 [2024-11-19 23:29:10.706553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.609 [2024-11-19 23:29:10.707197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:24.609 [2024-11-19 23:29:10.708052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.262 ms, result 0
00:16:24.609 [2024-11-19 23:29:10.709015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:24.609 Some configs were skipped because the RPC state that can call them passed over.
00:16:24.609 23:29:10 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:16:24.870 [2024-11-19 23:29:10.936913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:24.870 [2024-11-19 23:29:10.936947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:16:24.870 [2024-11-19 23:29:10.936957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.702 ms
00:16:24.870 [2024-11-19 23:29:10.936965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:24.870 [2024-11-19 23:29:10.936991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.782 ms, result 0
00:16:24.870 true
00:16:24.870 23:29:10 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:16:25.133 [2024-11-19 23:29:11.136607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.133 [2024-11-19 23:29:11.136723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:16:25.133 [2024-11-19 23:29:11.136744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms
00:16:25.133 [2024-11-19 23:29:11.136752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.133 [2024-11-19 23:29:11.136780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.428 ms, result 0
00:16:25.133 true
00:16:25.133 23:29:11 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85353
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85353 ']'
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85353
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85353
00:16:25.133 killing process with pid 85353
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85353'
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85353
00:16:25.133 23:29:11 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85353
00:16:25.133 [2024-11-19 23:29:11.264057] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.264101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:25.133 [2024-11-19 23:29:11.264112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:25.133 [2024-11-19 23:29:11.264121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.264150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:25.133 [2024-11-19 23:29:11.264552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.264568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:25.133 [2024-11-19 23:29:11.264579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:16:25.133 [2024-11-19 23:29:11.264586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.264813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.264822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:25.133 [2024-11-19 23:29:11.264828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:16:25.133 [2024-11-19 23:29:11.264836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.267845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.267872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:25.133 [2024-11-19 23:29:11.267879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.994 ms 00:16:25.133 [2024-11-19 23:29:11.267886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.273061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.273090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:25.133 [2024-11-19 23:29:11.273098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:16:25.133 [2024-11-19 23:29:11.273107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.274658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.274701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:25.133 [2024-11-19 23:29:11.274708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:16:25.133 [2024-11-19 23:29:11.274715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.278494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.278601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:25.133 [2024-11-19 23:29:11.278613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.739 ms 00:16:25.133 [2024-11-19 23:29:11.278622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.278721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.278742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:25.133 [2024-11-19 23:29:11.278749] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:25.133 [2024-11-19 23:29:11.278756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.280627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.280659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:25.133 [2024-11-19 23:29:11.280666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:16:25.133 [2024-11-19 23:29:11.280675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.282150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.282180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:25.133 [2024-11-19 23:29:11.282186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:16:25.133 [2024-11-19 23:29:11.282193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.283315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.133 [2024-11-19 23:29:11.283346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:25.133 [2024-11-19 23:29:11.283352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:16:25.133 [2024-11-19 23:29:11.283359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.133 [2024-11-19 23:29:11.284578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.134 [2024-11-19 23:29:11.284609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:25.134 [2024-11-19 23:29:11.284616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:16:25.134 [2024-11-19 23:29:11.284622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.134 [2024-11-19 23:29:11.284648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:25.134 [2024-11-19 23:29:11.284660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:25.134 [2024-11-19 23:29:11.284728] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11 through Band 100: 0 / 261120 wr_cnt: 0 state: free (all 90 bands identical)
00:16:25.135 [2024-11-19 23:29:11.285334] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:25.135 [2024-11-19 23:29:11.285340] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0f3b695f-3979-428e-be6f-ba7a91aa0ca7
00:16:25.135 [2024-11-19 23:29:11.285347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:25.135 [2024-11-19 23:29:11.285354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:25.135 [2024-11-19 23:29:11.285361] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:25.135 [2024-11-19 23:29:11.285367] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:25.135 [2024-11-19 23:29:11.285373] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:25.135 [2024-11-19 23:29:11.285382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:16:25.135 [2024-11-19 23:29:11.285389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:25.135 [2024-11-19 23:29:11.285394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:25.135 [2024-11-19 23:29:11.285400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:16:25.135 [2024-11-19 23:29:11.285405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.135
[2024-11-19 23:29:11.285412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:25.135 [2024-11-19 23:29:11.285419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms
00:16:25.135 [2024-11-19 23:29:11.285428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.135 [2024-11-19 23:29:11.287125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.135 [2024-11-19 23:29:11.287221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:25.135 [2024-11-19 23:29:11.287267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms
00:16:25.135 [2024-11-19 23:29:11.287291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.135 [2024-11-19 23:29:11.287378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:25.135 [2024-11-19 23:29:11.287485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:25.135 [2024-11-19 23:29:11.287504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:16:25.135 [2024-11-19 23:29:11.287520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.135-136 [2024-11-19 23:29:11.292117-307698] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev (each duration: 0.000 ms, status: 0)
00:16:25.136 [2024-11-19 23:29:11.307816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 43.741 ms, result 0
00:16:25.397 23:29:11 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:16:25.397 23:29:11 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:25.398 [2024-11-19 23:29:11.522669] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:16:25.398 [2024-11-19 23:29:11.523004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85389 ] 00:16:25.658 [2024-11-19 23:29:11.677167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.658 [2024-11-19 23:29:11.696791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.658 [2024-11-19 23:29:11.779792] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:25.658 [2024-11-19 23:29:11.779945] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:25.922 [2024-11-19 23:29:11.926022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.926057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:25.922 [2024-11-19 23:29:11.926067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:25.922 [2024-11-19 23:29:11.926073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.927807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.927835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:25.922 [2024-11-19 23:29:11.927843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:16:25.922 [2024-11-19 23:29:11.927848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.927904] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:25.922 [2024-11-19 23:29:11.928090] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:25.922 [2024-11-19 23:29:11.928101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.928109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:25.922 [2024-11-19 23:29:11.928117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:25.922 [2024-11-19 23:29:11.928123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.929066] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:25.922 [2024-11-19 23:29:11.931064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.931172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:25.922 [2024-11-19 23:29:11.931184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.999 ms 00:16:25.922 [2024-11-19 23:29:11.931194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.931238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.931246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:25.922 [2024-11-19 23:29:11.931252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.015 ms 00:16:25.922 [2024-11-19 23:29:11.931258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.935598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.935623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:25.922 [2024-11-19 23:29:11.935632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.309 ms 00:16:25.922 [2024-11-19 23:29:11.935637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.935726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.935748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:25.922 [2024-11-19 23:29:11.935755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:25.922 [2024-11-19 23:29:11.935760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.935782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.935789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:25.922 [2024-11-19 23:29:11.935795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:25.922 [2024-11-19 23:29:11.935800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.935815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:25.922 [2024-11-19 23:29:11.936984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.937009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:25.922 [2024-11-19 23:29:11.937018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:16:25.922 [2024-11-19 23:29:11.937026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.937056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.937065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:25.922 [2024-11-19 23:29:11.937073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:25.922 [2024-11-19 23:29:11.937078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.937091] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:25.922 [2024-11-19 23:29:11.937104] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:25.922 [2024-11-19 23:29:11.937130] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:25.922 [2024-11-19 23:29:11.937145] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:25.922 [2024-11-19 23:29:11.937223] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:25.922 [2024-11-19 23:29:11.937233] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:25.922 [2024-11-19 23:29:11.937241] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:25.922 [2024-11-19 23:29:11.937248] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:25.922 [2024-11-19 23:29:11.937255] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:25.922 [2024-11-19 23:29:11.937261] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:25.922 [2024-11-19 23:29:11.937266] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:25.922 [2024-11-19 23:29:11.937272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:25.922 [2024-11-19 23:29:11.937276] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:25.922 [2024-11-19 23:29:11.937286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.937293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:25.922 [2024-11-19 23:29:11.937299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:16:25.922 [2024-11-19 23:29:11.937304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.937369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.922 [2024-11-19 23:29:11.937375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:25.922 [2024-11-19 23:29:11.937381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:25.922 [2024-11-19 23:29:11.937386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.922 [2024-11-19 23:29:11.937458] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:25.922 [2024-11-19 23:29:11.937465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:25.922 [2024-11-19 23:29:11.937474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:25.922 [2024-11-19 23:29:11.937484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.922 [2024-11-19 23:29:11.937490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:25.922 [2024-11-19 23:29:11.937495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:25.923 [2024-11-19 23:29:11.937513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:25.923 [2024-11-19 23:29:11.937523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:25.923 [2024-11-19 23:29:11.937528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:25.923 [2024-11-19 23:29:11.937534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:25.923 [2024-11-19 23:29:11.937539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:25.923 [2024-11-19 23:29:11.937544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:25.923 [2024-11-19 23:29:11.937549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937554] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:25.923 [2024-11-19 23:29:11.937559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:25.923 [2024-11-19 23:29:11.937574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:25.923 [2024-11-19 23:29:11.937590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:25.923 [2024-11-19 23:29:11.937607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:25.923 [2024-11-19 23:29:11.937622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:25.923 [2024-11-19 23:29:11.937638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:25.923 [2024-11-19 23:29:11.937649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:25.923 [2024-11-19 23:29:11.937654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:25.923 [2024-11-19 23:29:11.937659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:25.923 [2024-11-19 23:29:11.937665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:25.923 [2024-11-19 23:29:11.937671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:25.923 [2024-11-19 23:29:11.937676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:25.923 [2024-11-19 23:29:11.937689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:25.923 [2024-11-19 23:29:11.937695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937700] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:25.923 [2024-11-19 23:29:11.937708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:25.923 [2024-11-19 23:29:11.937714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:25.923 [2024-11-19 23:29:11.937726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:25.923 
[2024-11-19 23:29:11.937749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:25.923 [2024-11-19 23:29:11.937755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:25.923 [2024-11-19 23:29:11.937761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:25.923 [2024-11-19 23:29:11.937767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:25.923 [2024-11-19 23:29:11.937773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:25.923 [2024-11-19 23:29:11.937781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:25.923 [2024-11-19 23:29:11.937788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:25.923 [2024-11-19 23:29:11.937796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:25.923 [2024-11-19 23:29:11.937804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:25.923 [2024-11-19 23:29:11.937816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:25.923 [2024-11-19 23:29:11.937822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:25.923 [2024-11-19 23:29:11.937828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:25.923 [2024-11-19 23:29:11.937834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:25.923 [2024-11-19 23:29:11.937840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:25.923 [2024-11-19 23:29:11.937846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:25.923 [2024-11-19 23:29:11.937852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:25.923 [2024-11-19 23:29:11.937859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:25.923 [2024-11-19 23:29:11.937865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:25.923 [2024-11-19 23:29:11.937871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:25.923 [2024-11-19 23:29:11.937878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:25.923 [2024-11-19 23:29:11.937884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:25.923 [2024-11-19 23:29:11.937891] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:25.923 [2024-11-19 23:29:11.937898] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:25.923 [2024-11-19 23:29:11.937906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:25.923 [2024-11-19 23:29:11.937914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:25.923 [2024-11-19 23:29:11.937920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:25.923 [2024-11-19 23:29:11.937927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:25.923 [2024-11-19 23:29:11.937933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.923 [2024-11-19 23:29:11.937939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:25.923 [2024-11-19 23:29:11.937946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:16:25.923 [2024-11-19 23:29:11.937953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.923 [2024-11-19 23:29:11.945777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.923 [2024-11-19 23:29:11.945881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:25.923 [2024-11-19 23:29:11.945894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.787 ms 00:16:25.923 [2024-11-19 23:29:11.945899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.923 [2024-11-19 23:29:11.945992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.923 [2024-11-19 23:29:11.946007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:25.923 [2024-11-19 23:29:11.946014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:25.924 [2024-11-19 23:29:11.946020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.961889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.961998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:25.924 [2024-11-19 23:29:11.962012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.851 ms 00:16:25.924 [2024-11-19 23:29:11.962018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.962081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.962090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:25.924 [2024-11-19 23:29:11.962097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:25.924 [2024-11-19 23:29:11.962102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.962394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.962413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:25.924 [2024-11-19 23:29:11.962421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:16:25.924 [2024-11-19 23:29:11.962427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 
23:29:11.962529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.962542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:25.924 [2024-11-19 23:29:11.962548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:25.924 [2024-11-19 23:29:11.962554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.967818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.967851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:25.924 [2024-11-19 23:29:11.967863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.248 ms 00:16:25.924 [2024-11-19 23:29:11.967872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.970319] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:25.924 [2024-11-19 23:29:11.970359] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:25.924 [2024-11-19 23:29:11.970373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.970382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:25.924 [2024-11-19 23:29:11.970392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.396 ms 00:16:25.924 [2024-11-19 23:29:11.970400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.983622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.983653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:25.924 [2024-11-19 23:29:11.983661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.174 ms 00:16:25.924 [2024-11-19 23:29:11.983667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.985315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.985418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:25.924 [2024-11-19 23:29:11.985429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:16:25.924 [2024-11-19 23:29:11.985435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.986783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.986805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:25.924 [2024-11-19 23:29:11.986812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:16:25.924 [2024-11-19 23:29:11.986822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:11.987059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:11.987071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:25.924 [2024-11-19 23:29:11.987078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:16:25.924 [2024-11-19 23:29:11.987083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.002166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.002203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:25.924 [2024-11-19 23:29:12.002212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.064 ms 00:16:25.924 [2024-11-19 23:29:12.002219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.008124] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:25.924 [2024-11-19 23:29:12.020117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.020151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:25.924 [2024-11-19 23:29:12.020161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.851 ms 00:16:25.924 [2024-11-19 23:29:12.020167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.020254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.020262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:25.924 [2024-11-19 23:29:12.020273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:25.924 [2024-11-19 23:29:12.020279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.020316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.020326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:25.924 [2024-11-19 23:29:12.020332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:25.924 [2024-11-19 23:29:12.020338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.020353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.020360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:25.924 [2024-11-19 23:29:12.020365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:25.924 [2024-11-19 23:29:12.020373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.020399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:25.924 [2024-11-19 23:29:12.020409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.020415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:25.924 [2024-11-19 23:29:12.020421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:25.924 [2024-11-19 23:29:12.020426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.023273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.023382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:25.924 [2024-11-19 23:29:12.023394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:16:25.924 [2024-11-19 23:29:12.023405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:25.924 [2024-11-19 23:29:12.023462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:25.924 [2024-11-19 23:29:12.023474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization
00:16:25.924 [2024-11-19 23:29:12.023482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:16:25.924 [2024-11-19 23:29:12.023493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:25.924 [2024-11-19 23:29:12.024143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:25.924 [2024-11-19 23:29:12.024952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.898 ms, result 0
00:16:25.924 [2024-11-19 23:29:12.025586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:25.924 [2024-11-19 23:29:12.033328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:26.869 [2024-11-19T23:29:14.448Z] Copying: 16/256 [MB] (16 MBps)
[2024-11-19T23:29:15.390Z] Copying: 29/256 [MB] (12 MBps)
[2024-11-19T23:29:16.332Z] Copying: 51/256 [MB] (22 MBps)
[2024-11-19T23:29:17.274Z] Copying: 70/256 [MB] (18 MBps)
[2024-11-19T23:29:18.217Z] Copying: 88/256 [MB] (18 MBps)
[2024-11-19T23:29:19.163Z] Copying: 100/256 [MB] (12 MBps)
[2024-11-19T23:29:20.105Z] Copying: 119/256 [MB] (19 MBps)
[2024-11-19T23:29:21.047Z] Copying: 139/256 [MB] (19 MBps)
[2024-11-19T23:29:22.442Z] Copying: 161/256 [MB] (21 MBps)
[2024-11-19T23:29:23.387Z] Copying: 180/256 [MB] (19 MBps)
[2024-11-19T23:29:24.329Z] Copying: 202/256 [MB] (21 MBps)
[2024-11-19T23:29:25.270Z] Copying: 221/256 [MB] (19 MBps)
[2024-11-19T23:29:25.844Z] Copying: 248/256 [MB] (26 MBps)
[2024-11-19T23:29:25.844Z] Copying: 256/256 [MB] (average 18 MBps)
[2024-11-19 23:29:25.531962] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:39.652 [2024-11-19 23:29:25.533919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.652 [2024-11-19 23:29:25.533958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:39.652 [2024-11-19 23:29:25.533971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:39.652 [2024-11-19 23:29:25.533980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.652 [2024-11-19 23:29:25.534008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:39.652 [2024-11-19 23:29:25.534690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.652 [2024-11-19 23:29:25.534725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:39.652 [2024-11-19 23:29:25.534762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms
00:16:39.652 [2024-11-19 23:29:25.534771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.652 [2024-11-19 23:29:25.535054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.652 [2024-11-19 23:29:25.535067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:39.652 [2024-11-19 23:29:25.535080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms
00:16:39.652 [2024-11-19 23:29:25.535089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.652 [2024-11-19 23:29:25.538869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.652 [2024-11-19 23:29:25.538900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:16:39.652 [2024-11-19 23:29:25.538911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.762 ms 00:16:39.652 [2024-11-19 23:29:25.538920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-11-19 23:29:25.545860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-11-19 23:29:25.545903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:39.652 [2024-11-19 23:29:25.545915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:16:39.652 [2024-11-19 23:29:25.545938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-11-19 23:29:25.548579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-11-19 23:29:25.548632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:39.652 [2024-11-19 23:29:25.548642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:16:39.652 [2024-11-19 23:29:25.548650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-11-19 23:29:25.554005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-11-19 23:29:25.554058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:39.652 [2024-11-19 23:29:25.554068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.309 ms 00:16:39.652 [2024-11-19 23:29:25.554076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-11-19 23:29:25.554206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-11-19 23:29:25.554216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:39.652 [2024-11-19 23:29:25.554225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:39.652 [2024-11-19 23:29:25.554239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-11-19 23:29:25.556808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-11-19 23:29:25.556856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:39.652 [2024-11-19 23:29:25.556866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:16:39.652 [2024-11-19 23:29:25.556873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-11-19 23:29:25.559372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.652 [2024-11-19 23:29:25.559415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:39.652 [2024-11-19 23:29:25.559425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:16:39.652 [2024-11-19 23:29:25.559432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.652 [2024-11-19 23:29:25.561494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.653 [2024-11-19 23:29:25.561542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:39.653 [2024-11-19 23:29:25.561551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:16:39.653 [2024-11-19 23:29:25.561558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.653 [2024-11-19 23:29:25.563611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.653 [2024-11-19 23:29:25.563779] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:16:39.653 [2024-11-19 23:29:25.563796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.980 ms
00:16:39.653 [2024-11-19 23:29:25.563803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.653 [2024-11-19 23:29:25.563896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:39.653 [2024-11-19 23:29:25.563914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
00:16:39.654 [2024-11-19 23:29:25.564705] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:39.654 [2024-11-19 23:29:25.564713] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0f3b695f-3979-428e-be6f-ba7a91aa0ca7
00:16:39.654 [2024-11-19 23:29:25.564722] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:39.654 [2024-11-19 23:29:25.564742] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:39.654 [2024-11-19 23:29:25.564750] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:39.654 [2024-11-19 23:29:25.564758] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:39.654 [2024-11-19 23:29:25.564766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:39.654 [2024-11-19 23:29:25.564774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:16:39.654 [2024-11-19 23:29:25.564785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:39.654 [2024-11-19 23:29:25.564792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:39.654 [2024-11-19 23:29:25.564799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:16:39.654 [2024-11-19 23:29:25.564807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.654 [2024-11-19 23:29:25.564815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:39.654 [2024-11-19 23:29:25.564824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms
00:16:39.654 [2024-11-19 23:29:25.564831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.654 [2024-11-19 23:29:25.567039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.654 [2024-11-19 23:29:25.567270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:39.654 [2024-11-19 23:29:25.567288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.189 ms
00:16:39.654 [2024-11-19 23:29:25.567303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.654 [2024-11-19 23:29:25.567438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:39.654 [2024-11-19 23:29:25.567450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:39.654 [2024-11-19 23:29:25.567461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms
00:16:39.654 [2024-11-19 23:29:25.567470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:39.654 [2024-11-19 23:29:25.575282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:39.654 [2024-11-19 23:29:25.575330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:39.654 [2024-11-19 23:29:25.575340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:39.654 [2024-11-19 23:29:25.575354] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:16:39.654 [2024-11-19 23:29:25.575415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.654 [2024-11-19 23:29:25.575424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:39.654 [2024-11-19 23:29:25.575432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.654 [2024-11-19 23:29:25.575440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.654 [2024-11-19 23:29:25.575485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.654 [2024-11-19 23:29:25.575494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:39.654 [2024-11-19 23:29:25.575502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.654 [2024-11-19 23:29:25.575510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.654 [2024-11-19 23:29:25.575532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.654 [2024-11-19 23:29:25.575540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:39.654 [2024-11-19 23:29:25.575557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.654 [2024-11-19 23:29:25.575564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.654 [2024-11-19 23:29:25.589286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.654 [2024-11-19 23:29:25.589336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:39.654 [2024-11-19 23:29:25.589347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.654 [2024-11-19 23:29:25.589362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.654 [2024-11-19 23:29:25.600384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.654 [2024-11-19 23:29:25.600460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:39.654 [2024-11-19 23:29:25.600472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.654 [2024-11-19 23:29:25.600481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.654 [2024-11-19 23:29:25.600534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.654 [2024-11-19 23:29:25.600553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:39.654 [2024-11-19 23:29:25.600562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.654 [2024-11-19 23:29:25.600571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.654 [2024-11-19 23:29:25.600604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.654 [2024-11-19 23:29:25.600616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:39.655 [2024-11-19 23:29:25.600625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.655 [2024-11-19 23:29:25.600634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.655 [2024-11-19 23:29:25.600708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.655 [2024-11-19 23:29:25.600719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:39.655 [2024-11-19 23:29:25.600752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:39.655 [2024-11-19 23:29:25.600762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.655 [2024-11-19 23:29:25.600796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.655 [2024-11-19 23:29:25.600810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:39.655 [2024-11-19 23:29:25.600819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.655 [2024-11-19 23:29:25.600827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.655 [2024-11-19 23:29:25.600872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.655 [2024-11-19 23:29:25.600882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:39.655 [2024-11-19 23:29:25.600894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.655 [2024-11-19 23:29:25.600903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.655 [2024-11-19 23:29:25.600979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:39.655 [2024-11-19 23:29:25.600995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:39.655 [2024-11-19 23:29:25.601006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:39.655 [2024-11-19 23:29:25.601016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.655 [2024-11-19 23:29:25.601175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.227 ms, result 0 00:16:39.655 00:16:39.655 00:16:39.655 23:29:25 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:16:39.655 23:29:25 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:40.227 23:29:26 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:40.488 [2024-11-19 23:29:26.453088] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:16:40.488 [2024-11-19 23:29:26.453256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85555 ] 00:16:40.488 [2024-11-19 23:29:26.617348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.488 [2024-11-19 23:29:26.645993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.750 [2024-11-19 23:29:26.759649] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:40.751 [2024-11-19 23:29:26.760056] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:40.751 [2024-11-19 23:29:26.920447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.751 [2024-11-19 23:29:26.920506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:40.751 [2024-11-19 23:29:26.920521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:40.751 [2024-11-19 23:29:26.920529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.923114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.751 [2024-11-19 23:29:26.923302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.751 [2024-11-19 23:29:26.923323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:16:40.751 [2024-11-19 23:29:26.923331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.923824] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:40.751 [2024-11-19 23:29:26.924164] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:40.751 [2024-11-19 23:29:26.924199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.751 [2024-11-19 23:29:26.924211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.751 [2024-11-19 23:29:26.924222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:16:40.751 [2024-11-19 23:29:26.924230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.926581] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:40.751 [2024-11-19 23:29:26.930366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.751 [2024-11-19 23:29:26.930415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:40.751 [2024-11-19 23:29:26.930434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.787 ms 00:16:40.751 [2024-11-19 23:29:26.930443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.930521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.751 [2024-11-19 23:29:26.930532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:40.751 [2024-11-19 23:29:26.930546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:40.751 [2024-11-19 23:29:26.930554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.938591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:40.751 [2024-11-19 23:29:26.938636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.751 [2024-11-19 23:29:26.938648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.981 ms 00:16:40.751 [2024-11-19 23:29:26.938656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.938818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.751 [2024-11-19 23:29:26.938831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.751 [2024-11-19 23:29:26.938841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:16:40.751 [2024-11-19 23:29:26.938848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.938882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.751 [2024-11-19 23:29:26.938892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:40.751 [2024-11-19 23:29:26.938900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:40.751 [2024-11-19 23:29:26.938911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.751 [2024-11-19 23:29:26.938934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:41.014 [2024-11-19 23:29:26.940987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.014 [2024-11-19 23:29:26.941153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.014 [2024-11-19 23:29:26.941178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:16:41.014 [2024-11-19 23:29:26.941189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.014 [2024-11-19 23:29:26.941245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.014 [2024-11-19 23:29:26.941254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:41.014 [2024-11-19 23:29:26.941264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:41.014 [2024-11-19 23:29:26.941271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.014 [2024-11-19 23:29:26.941290] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:41.014 [2024-11-19 23:29:26.941309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:41.014 [2024-11-19 23:29:26.941346] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:41.014 [2024-11-19 23:29:26.941371] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:41.014 [2024-11-19 23:29:26.941477] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:41.014 [2024-11-19 23:29:26.941489] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:41.014 [2024-11-19 23:29:26.941500] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:41.014 [2024-11-19 23:29:26.941512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:41.014 [2024-11-19 23:29:26.941521] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:41.014 [2024-11-19 23:29:26.941533] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:41.014 [2024-11-19 23:29:26.941541] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:41.014 [2024-11-19 23:29:26.941549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:41.014 [2024-11-19 23:29:26.941560] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:41.014 [2024-11-19 23:29:26.941573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.014 [2024-11-19 23:29:26.941580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:41.014 [2024-11-19 23:29:26.941589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:16:41.014 [2024-11-19 23:29:26.941596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.014 [2024-11-19 23:29:26.941685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.014 [2024-11-19 23:29:26.941695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:41.014 [2024-11-19 23:29:26.941708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:41.014 [2024-11-19 23:29:26.941717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.014 [2024-11-19 23:29:26.941845] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:41.014 [2024-11-19 23:29:26.941856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:41.014 [2024-11-19 23:29:26.941868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.014 [2024-11-19 23:29:26.941884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.014 [2024-11-19 23:29:26.941893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:41.014 [2024-11-19 23:29:26.941901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:41.014 [2024-11-19 23:29:26.941908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:41.014 [2024-11-19 23:29:26.941921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:41.014 [2024-11-19 23:29:26.941931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:41.015 [2024-11-19 23:29:26.941939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.015 [2024-11-19 23:29:26.941947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:41.015 [2024-11-19 23:29:26.941955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:41.015 [2024-11-19 23:29:26.941963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:41.015 [2024-11-19 23:29:26.941971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:41.015 [2024-11-19 23:29:26.941979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:41.015 [2024-11-19 23:29:26.941987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.015 [2024-11-19 23:29:26.941995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:41.015 [2024-11-19 23:29:26.942003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:41.015 [2024-11-19 23:29:26.942013] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:41.015 [2024-11-19 23:29:26.942029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.015 [2024-11-19 23:29:26.942046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:41.015 [2024-11-19 23:29:26.942058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.015 [2024-11-19 23:29:26.942073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:41.015 [2024-11-19 23:29:26.942089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.015 [2024-11-19 23:29:26.942102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:41.015 [2024-11-19 23:29:26.942109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:41.015 [2024-11-19 23:29:26.942121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:41.015 [2024-11-19 23:29:26.942128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.015 [2024-11-19 23:29:26.942143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:41.015 [2024-11-19 23:29:26.942150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:41.015 [2024-11-19 23:29:26.942156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:41.015 [2024-11-19 23:29:26.942163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:41.015 [2024-11-19 23:29:26.942170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:41.015 [2024-11-19 23:29:26.942179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:41.015 [2024-11-19 23:29:26.942193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:41.015 [2024-11-19 23:29:26.942200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942206] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:41.015 [2024-11-19 23:29:26.942214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:41.015 [2024-11-19 23:29:26.942221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:41.015 [2024-11-19 23:29:26.942229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:41.015 [2024-11-19 23:29:26.942237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:41.015 [2024-11-19 23:29:26.942243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:41.015 [2024-11-19 23:29:26.942250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:41.015 
[2024-11-19 23:29:26.942258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:41.015 [2024-11-19 23:29:26.942264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:41.015 [2024-11-19 23:29:26.942272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:41.015 [2024-11-19 23:29:26.942280] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:41.015 [2024-11-19 23:29:26.942290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.015 [2024-11-19 23:29:26.942303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:41.015 [2024-11-19 23:29:26.942312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:41.015 [2024-11-19 23:29:26.942320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:41.015 [2024-11-19 23:29:26.942330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:41.015 [2024-11-19 23:29:26.942339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:41.015 [2024-11-19 23:29:26.942346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:41.015 [2024-11-19 23:29:26.942355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:41.015 [2024-11-19 23:29:26.942365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:41.015 [2024-11-19 23:29:26.942373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:41.015 [2024-11-19 23:29:26.942381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:41.015 [2024-11-19 23:29:26.942399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:41.015 [2024-11-19 23:29:26.942407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:41.015 [2024-11-19 23:29:26.942415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:41.015 [2024-11-19 23:29:26.942424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:41.015 [2024-11-19 23:29:26.942432] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:41.015 [2024-11-19 23:29:26.942441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:41.015 [2024-11-19 23:29:26.942453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:41.015 [2024-11-19 23:29:26.942461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:41.015 [2024-11-19 23:29:26.942469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:41.015 [2024-11-19 23:29:26.942477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:41.015 [2024-11-19 23:29:26.942485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.015 [2024-11-19 23:29:26.942493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:41.015 [2024-11-19 23:29:26.942508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:16:41.015 [2024-11-19 23:29:26.942516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.015 [2024-11-19 23:29:26.956281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.015 [2024-11-19 23:29:26.956467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.015 [2024-11-19 23:29:26.956485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.702 ms 00:16:41.015 [2024-11-19 23:29:26.956493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.015 [2024-11-19 23:29:26.956638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.015 [2024-11-19 23:29:26.956652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:41.015 [2024-11-19 23:29:26.956661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:41.015 [2024-11-19 23:29:26.956669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:26.977700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:26.977798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.016 [2024-11-19 23:29:26.977815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.006 ms 00:16:41.016 [2024-11-19 23:29:26.977833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:26.977952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:26.977989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.016 [2024-11-19 23:29:26.978002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:41.016 [2024-11-19 23:29:26.978013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:26.978572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:26.978622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.016 [2024-11-19 23:29:26.978638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:16:41.016 [2024-11-19 23:29:26.978651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:26.978883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:26.978903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.016 [2024-11-19 23:29:26.978923] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:16:41.016 [2024-11-19 23:29:26.978934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:26.987622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:26.987668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.016 [2024-11-19 23:29:26.987684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.657 ms 00:16:41.016 [2024-11-19 23:29:26.987692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:26.991770] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:41.016 [2024-11-19 23:29:26.991815] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:41.016 [2024-11-19 23:29:26.991827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:26.991835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:41.016 [2024-11-19 23:29:26.991844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.016 ms 00:16:41.016 [2024-11-19 23:29:26.991851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.007825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.007874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:41.016 [2024-11-19 23:29:27.007887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.893 ms 00:16:41.016 [2024-11-19 23:29:27.007896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.010912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.010957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:41.016 [2024-11-19 23:29:27.010968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:16:41.016 [2024-11-19 23:29:27.010976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.013766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.013810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:41.016 [2024-11-19 23:29:27.013820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.634 ms 00:16:41.016 [2024-11-19 23:29:27.013828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.014174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.014186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:41.016 [2024-11-19 23:29:27.014195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:16:41.016 [2024-11-19 23:29:27.014203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.039349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.039407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:41.016 [2024-11-19 23:29:27.039419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.124 ms 00:16:41.016 [2024-11-19 23:29:27.039429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.047938] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:41.016 [2024-11-19 23:29:27.066703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.066769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:41.016 [2024-11-19 23:29:27.066783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.178 ms 00:16:41.016 [2024-11-19 23:29:27.066800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.066892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.066905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:41.016 [2024-11-19 23:29:27.066919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:41.016 [2024-11-19 23:29:27.066932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.066992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.067002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:41.016 [2024-11-19 23:29:27.067016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:41.016 [2024-11-19 23:29:27.067024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.067052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.067062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:41.016 [2024-11-19 23:29:27.067070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:41.016 [2024-11-19 23:29:27.067077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.067117] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:41.016 [2024-11-19 23:29:27.067128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.067136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:41.016 [2024-11-19 23:29:27.067145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:41.016 [2024-11-19 23:29:27.067153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.073187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.073380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:41.016 [2024-11-19 23:29:27.073399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.012 ms 00:16:41.016 [2024-11-19 23:29:27.073408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 [2024-11-19 23:29:27.073503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.016 [2024-11-19 23:29:27.073518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:41.016 [2024-11-19 23:29:27.073528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:41.016 [2024-11-19 23:29:27.073535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.016 
[2024-11-19 23:29:27.074567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:41.016 [2024-11-19 23:29:27.075895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.803 ms, result 0 00:16:41.016 [2024-11-19 23:29:27.077110] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:41.016 [2024-11-19 23:29:27.084527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:41.280 [2024-11-19T23:29:27.472Z] Copying: 4096/4096 [kB] (average 15 MBps) [2024-11-19 23:29:27.341934] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:41.280 [2024-11-19 23:29:27.342960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.280 [2024-11-19 23:29:27.343013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:41.280 [2024-11-19 23:29:27.343025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:41.280 [2024-11-19 23:29:27.343033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.280 [2024-11-19 23:29:27.343060] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:41.280 [2024-11-19 23:29:27.343703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.280 [2024-11-19 23:29:27.343756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:41.280 [2024-11-19 23:29:27.343768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:16:41.280 [2024-11-19 23:29:27.343776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.280 [2024-11-19 23:29:27.346898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.280 [2024-11-19 23:29:27.346944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:41.280 [2024-11-19 23:29:27.346955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.097 ms 00:16:41.280 [2024-11-19 23:29:27.346968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.280 [2024-11-19 23:29:27.351350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.280 [2024-11-19 23:29:27.351386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:41.280 [2024-11-19 23:29:27.351396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.365 ms 00:16:41.280 [2024-11-19 23:29:27.351404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.280 [2024-11-19 23:29:27.358379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.280 [2024-11-19 23:29:27.358419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:41.280 [2024-11-19 23:29:27.358430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.943 ms 00:16:41.280 [2024-11-19 23:29:27.358451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.280 [2024-11-19 23:29:27.361214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.280 [2024-11-19 23:29:27.361264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:41.280 [2024-11-19 23:29:27.361275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 2.715 ms 00:16:41.280 [2024-11-19 23:29:27.361282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.280 [2024-11-19 23:29:27.366563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.280 [2024-11-19 23:29:27.366781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:41.281 [2024-11-19 23:29:27.366800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:16:41.281 [2024-11-19 23:29:27.366808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-11-19 23:29:27.366937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-11-19 23:29:27.366947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:41.281 [2024-11-19 23:29:27.366955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:41.281 [2024-11-19 23:29:27.366968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-11-19 23:29:27.369625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-11-19 23:29:27.369672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:41.281 [2024-11-19 23:29:27.369681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:16:41.281 [2024-11-19 23:29:27.369688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-11-19 23:29:27.371965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-11-19 23:29:27.372022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:41.281 [2024-11-19 23:29:27.372032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:16:41.281 [2024-11-19 23:29:27.372039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-11-19 23:29:27.373994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-11-19 23:29:27.374037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:41.281 [2024-11-19 23:29:27.374046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:16:41.281 [2024-11-19 23:29:27.374052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-11-19 23:29:27.376300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.281 [2024-11-19 23:29:27.376345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:41.281 [2024-11-19 23:29:27.376354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:16:41.281 [2024-11-19 23:29:27.376360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.281 [2024-11-19 23:29:27.376399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:41.281 [2024-11-19 23:29:27.376415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 
23:29:27.376448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:16:41.281 [2024-11-19 23:29:27.376632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:41.281 [2024-11-19 23:29:27.376933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.376995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:41.282 [2024-11-19 23:29:27.377253] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:41.282 [2024-11-19 23:29:27.377261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0f3b695f-3979-428e-be6f-ba7a91aa0ca7 00:16:41.282 [2024-11-19 23:29:27.377274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:41.282 [2024-11-19 23:29:27.377288] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:41.282 
[2024-11-19 23:29:27.377295] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:41.282 [2024-11-19 23:29:27.377304] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:41.282 [2024-11-19 23:29:27.377311] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:41.282 [2024-11-19 23:29:27.377319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:41.282 [2024-11-19 23:29:27.377329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:41.282 [2024-11-19 23:29:27.377335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:41.282 [2024-11-19 23:29:27.377341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:41.282 [2024-11-19 23:29:27.377348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.282 [2024-11-19 23:29:27.377359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:41.282 [2024-11-19 23:29:27.377367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:16:41.282 [2024-11-19 23:29:27.377375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.379399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.282 [2024-11-19 23:29:27.379428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:41.282 [2024-11-19 23:29:27.379439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:16:41.282 [2024-11-19 23:29:27.379446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.379559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.282 [2024-11-19 23:29:27.379568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:41.282 [2024-11-19 23:29:27.379577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:41.282 [2024-11-19 23:29:27.379584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.387426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.282 [2024-11-19 23:29:27.387475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.282 [2024-11-19 23:29:27.387486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.282 [2024-11-19 23:29:27.387501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.387577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.282 [2024-11-19 23:29:27.387586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.282 [2024-11-19 23:29:27.387594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.282 [2024-11-19 23:29:27.387602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.387649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.282 [2024-11-19 23:29:27.387659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.282 [2024-11-19 23:29:27.387667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.282 [2024-11-19 23:29:27.387680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.387702] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:41.282 [2024-11-19 23:29:27.387713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.282 [2024-11-19 23:29:27.387721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.282 [2024-11-19 23:29:27.387767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.401929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.282 [2024-11-19 23:29:27.401980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.282 [2024-11-19 23:29:27.401992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.282 [2024-11-19 23:29:27.402000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.282 [2024-11-19 23:29:27.412894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.283 [2024-11-19 23:29:27.412944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.283 [2024-11-19 23:29:27.412956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.283 [2024-11-19 23:29:27.412964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.283 [2024-11-19 23:29:27.413046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.283 [2024-11-19 23:29:27.413056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.283 [2024-11-19 23:29:27.413066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.283 [2024-11-19 23:29:27.413074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.283 [2024-11-19 23:29:27.413107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.283 [2024-11-19 23:29:27.413120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.283 [2024-11-19 23:29:27.413128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.283 [2024-11-19 23:29:27.413136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.283 [2024-11-19 23:29:27.413210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.283 [2024-11-19 23:29:27.413220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.283 [2024-11-19 23:29:27.413229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.283 [2024-11-19 23:29:27.413237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.283 [2024-11-19 23:29:27.413269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.283 [2024-11-19 23:29:27.413279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:41.283 [2024-11-19 23:29:27.413290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.283 [2024-11-19 23:29:27.413299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.283 [2024-11-19 23:29:27.413341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.283 [2024-11-19 23:29:27.413351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.283 [2024-11-19 23:29:27.413359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.283 [2024-11-19 23:29:27.413372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
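The 'Dump statistics' step in the shutdown trace above reports total writes: 960, user writes: 0, WAF: inf. The WAF (write amplification factor) is simply total NAND writes divided by user writes, so a run that issued no user I/O divides 960 by 0 and the counter prints as inf. A minimal sketch of that arithmetic over a saved excerpt of this log (the file name ftl_dump.log is hypothetical, and this awk helper is illustrative, not part of the SPDK test suite):

awk '/total writes:/ { t = $NF }   # last field of the "total writes: N" record
     /user writes:/  { u = $NF }   # last field of the "user writes: N" record
     END { print "WAF:", (u == 0 ? "inf" : t / u) }' ftl_dump.log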
00:16:41.283 [2024-11-19 23:29:27.413420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.283 [2024-11-19 23:29:27.413434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.283 [2024-11-19 23:29:27.413443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.283 [2024-11-19 23:29:27.413450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.283 [2024-11-19 23:29:27.413602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.607 ms, result 0 00:16:41.544 00:16:41.544 00:16:41.544 23:29:27 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85569 00:16:41.544 23:29:27 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85569 00:16:41.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.544 23:29:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 85569 ']' 00:16:41.544 23:29:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.544 23:29:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.544 23:29:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.544 23:29:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.544 23:29:27 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:41.544 23:29:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:41.544 [2024-11-19 23:29:27.719673] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:16:41.544 [2024-11-19 23:29:27.719854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85569 ] 00:16:41.804 [2024-11-19 23:29:27.882134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.804 [2024-11-19 23:29:27.910423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.748 23:29:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.748 23:29:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:16:42.748 23:29:28 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:42.748 [2024-11-19 23:29:28.813375] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.748 [2024-11-19 23:29:28.813640] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:43.026 [2024-11-19 23:29:28.990016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.026 [2024-11-19 23:29:28.990074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:43.026 [2024-11-19 23:29:28.990089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:43.026 [2024-11-19 23:29:28.990099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.026 [2024-11-19 23:29:28.992618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.026 [2024-11-19 23:29:28.992674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.027 [2024-11-19 23:29:28.992685] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:16:43.027 [2024-11-19 23:29:28.992695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.027 [2024-11-19 23:29:28.992815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:43.027 [2024-11-19 23:29:28.993093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:43.027 [2024-11-19 23:29:28.993119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.027 [2024-11-19 23:29:28.993129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.027 [2024-11-19 23:29:28.993142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:43.027 [2024-11-19 23:29:28.993152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.027 [2024-11-19 23:29:28.995437] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:43.027 [2024-11-19 23:29:28.999229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.027 [2024-11-19 23:29:28.999280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:43.027 [2024-11-19 23:29:28.999300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.790 ms 00:16:43.027 [2024-11-19 23:29:28.999309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.027 [2024-11-19 23:29:28.999390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.027 [2024-11-19 23:29:28.999402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:43.027 [2024-11-19 23:29:28.999416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:43.027 [2024-11-19 23:29:28.999424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.027 [2024-11-19 23:29:29.007341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.027 [2024-11-19 23:29:29.007382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.027 [2024-11-19 23:29:29.007394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.855 ms 00:16:43.027 [2024-11-19 23:29:29.007401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.027 [2024-11-19 23:29:29.007521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.027 [2024-11-19 23:29:29.007531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.028 [2024-11-19 23:29:29.007547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:43.028 [2024-11-19 23:29:29.007557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.028 [2024-11-19 23:29:29.007586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.028 [2024-11-19 23:29:29.007595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:43.028 [2024-11-19 23:29:29.007608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:43.028 [2024-11-19 23:29:29.007618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.028 [2024-11-19 23:29:29.007650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:43.028 [2024-11-19 23:29:29.009661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
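The restart sequence traced above (ftl/trim.sh@92-@96) is the standard SPDK bring-up pattern: launch spdk_tgt in the background, poll the RPC socket until the target answers, then replay a saved JSON configuration, which re-creates the bdevs and kicks off the 'FTL startup' management process whose steps are traced here. A minimal sketch of that pattern, assuming $SPDK_DIR points at an SPDK checkout and config.json was produced earlier by rpc.py save_config (both names are placeholders; this is not the autotest waitforlisten helper itself):

"$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &           # background target, FTL init tracing on
svcpid=$!
for ((i = 0; i < 100; i++)); do                        # bounded retry, same idea as waitforlisten
    "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
"$SPDK_DIR/scripts/rpc.py" load_config < config.json   # bdevs come back; FTL startup runs here
# teardown later mirrors killprocess (trim.sh@102): kill "$svcpid" && wait "$svcpid"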
00:16:43.028 [2024-11-19 23:29:29.009708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.028 [2024-11-19 23:29:29.009722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:16:43.028 [2024-11-19 23:29:29.009750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.028 [2024-11-19 23:29:29.009795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.028 [2024-11-19 23:29:29.009806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:43.028 [2024-11-19 23:29:29.009814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:43.028 [2024-11-19 23:29:29.009824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.028 [2024-11-19 23:29:29.009846] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:43.028 [2024-11-19 23:29:29.009869] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:43.028 [2024-11-19 23:29:29.009911] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:43.028 [2024-11-19 23:29:29.009932] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:43.028 [2024-11-19 23:29:29.010038] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:43.028 [2024-11-19 23:29:29.010057] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:43.029 [2024-11-19 23:29:29.010068] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:43.029 [2024-11-19 23:29:29.010082] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:43.029 [2024-11-19 23:29:29.010091] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:43.029 [2024-11-19 23:29:29.010104] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:43.029 [2024-11-19 23:29:29.010112] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:43.029 [2024-11-19 23:29:29.010122] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:43.029 [2024-11-19 23:29:29.010131] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:43.029 [2024-11-19 23:29:29.010142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.029 [2024-11-19 23:29:29.010150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:43.029 [2024-11-19 23:29:29.010159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:16:43.029 [2024-11-19 23:29:29.010170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.029 [2024-11-19 23:29:29.010258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.029 [2024-11-19 23:29:29.010267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:43.029 [2024-11-19 23:29:29.010281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:43.029 [2024-11-19 23:29:29.010288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.029 [2024-11-19 23:29:29.010392] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:43.029 [2024-11-19 23:29:29.010402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:43.029 [2024-11-19 23:29:29.010415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.029 [2024-11-19 23:29:29.010423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.029 [2024-11-19 23:29:29.010437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:43.029 [2024-11-19 23:29:29.010445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:43.029 [2024-11-19 23:29:29.010454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:43.029 [2024-11-19 23:29:29.010469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:43.029 [2024-11-19 23:29:29.010479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:43.029 [2024-11-19 23:29:29.010487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.029 [2024-11-19 23:29:29.010497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:43.029 [2024-11-19 23:29:29.010505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:43.029 [2024-11-19 23:29:29.010514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.029 [2024-11-19 23:29:29.010522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:43.029 [2024-11-19 23:29:29.010531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:43.029 [2024-11-19 23:29:29.010539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.029 [2024-11-19 23:29:29.010549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:43.029 [2024-11-19 23:29:29.010557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:43.029 [2024-11-19 23:29:29.010566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.029 [2024-11-19 23:29:29.010574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:43.029 [2024-11-19 23:29:29.010587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:43.029 [2024-11-19 23:29:29.010596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.029 [2024-11-19 23:29:29.010606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:43.030 [2024-11-19 23:29:29.010613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:43.030 [2024-11-19 23:29:29.010623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.030 [2024-11-19 23:29:29.010631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:43.030 [2024-11-19 23:29:29.010641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:43.030 [2024-11-19 23:29:29.010649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.030 [2024-11-19 23:29:29.010658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:43.030 [2024-11-19 23:29:29.010667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:43.030 [2024-11-19 23:29:29.010676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.030 [2024-11-19 23:29:29.010683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:43.030 [2024-11-19 
23:29:29.010692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:43.030 [2024-11-19 23:29:29.010698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.030 [2024-11-19 23:29:29.010706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:43.030 [2024-11-19 23:29:29.010712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:43.030 [2024-11-19 23:29:29.010723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.030 [2024-11-19 23:29:29.010983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:43.030 [2024-11-19 23:29:29.011014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:43.030 [2024-11-19 23:29:29.011034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.030 [2024-11-19 23:29:29.011054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:43.030 [2024-11-19 23:29:29.011074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:43.030 [2024-11-19 23:29:29.011094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.030 [2024-11-19 23:29:29.011113] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:43.030 [2024-11-19 23:29:29.011134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:43.030 [2024-11-19 23:29:29.011154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.030 [2024-11-19 23:29:29.011174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.030 [2024-11-19 23:29:29.011194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:43.030 [2024-11-19 23:29:29.011214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:43.030 [2024-11-19 23:29:29.011232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:43.030 [2024-11-19 23:29:29.011311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:43.030 [2024-11-19 23:29:29.011333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:43.030 [2024-11-19 23:29:29.011361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:43.030 [2024-11-19 23:29:29.011382] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:43.030 [2024-11-19 23:29:29.011415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.030 [2024-11-19 23:29:29.011446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:43.030 [2024-11-19 23:29:29.011476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:43.030 [2024-11-19 23:29:29.011505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:43.030 [2024-11-19 23:29:29.011535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:43.030 [2024-11-19 23:29:29.011619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:43.030 
[2024-11-19 23:29:29.011652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:43.030 [2024-11-19 23:29:29.011681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:43.030 [2024-11-19 23:29:29.011712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:43.030 [2024-11-19 23:29:29.011758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:43.030 [2024-11-19 23:29:29.011789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:43.030 [2024-11-19 23:29:29.011818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:43.030 [2024-11-19 23:29:29.011848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:43.030 [2024-11-19 23:29:29.011916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:43.030 [2024-11-19 23:29:29.011952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:43.030 [2024-11-19 23:29:29.011981] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:43.030 [2024-11-19 23:29:29.012030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.030 [2024-11-19 23:29:29.012060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:43.030 [2024-11-19 23:29:29.012091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:43.030 [2024-11-19 23:29:29.012162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:43.030 [2024-11-19 23:29:29.012196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:43.030 [2024-11-19 23:29:29.012226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.012248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:43.030 [2024-11-19 23:29:29.012269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:16:43.030 [2024-11-19 23:29:29.012292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.026488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.026653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.030 [2024-11-19 23:29:29.026671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.089 ms 00:16:43.030 [2024-11-19 23:29:29.026682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.026838] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.026856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:43.030 [2024-11-19 23:29:29.026869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:43.030 [2024-11-19 23:29:29.026882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.039343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.039503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.030 [2024-11-19 23:29:29.039559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.438 ms 00:16:43.030 [2024-11-19 23:29:29.039585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.039667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.039696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.030 [2024-11-19 23:29:29.039717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.030 [2024-11-19 23:29:29.039779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.040404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.040544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.030 [2024-11-19 23:29:29.040621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:16:43.030 [2024-11-19 23:29:29.040648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.040838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.041096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.030 [2024-11-19 23:29:29.041138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:16:43.030 [2024-11-19 23:29:29.041159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.050342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.050499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.030 [2024-11-19 23:29:29.050554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.137 ms 00:16:43.030 [2024-11-19 23:29:29.050579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.054565] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:43.030 [2024-11-19 23:29:29.054722] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:43.030 [2024-11-19 23:29:29.054762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.054773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:43.030 [2024-11-19 23:29:29.054782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.055 ms 00:16:43.030 [2024-11-19 23:29:29.054792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.070803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 
23:29:29.070854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:43.030 [2024-11-19 23:29:29.070868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.945 ms 00:16:43.030 [2024-11-19 23:29:29.070882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.073794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.073952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:43.030 [2024-11-19 23:29:29.073968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:16:43.030 [2024-11-19 23:29:29.073978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.076577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.076626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:43.030 [2024-11-19 23:29:29.076636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.557 ms 00:16:43.030 [2024-11-19 23:29:29.076645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.077017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.077038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:43.030 [2024-11-19 23:29:29.077048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:16:43.030 [2024-11-19 23:29:29.077058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.113958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.114029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:43.030 [2024-11-19 23:29:29.114045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.874 ms 00:16:43.030 [2024-11-19 23:29:29.114059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.122337] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:43.030 [2024-11-19 23:29:29.141023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.141243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:43.030 [2024-11-19 23:29:29.141268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.857 ms 00:16:43.030 [2024-11-19 23:29:29.141278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.030 [2024-11-19 23:29:29.141376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.030 [2024-11-19 23:29:29.141388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:43.030 [2024-11-19 23:29:29.141403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:43.031 [2024-11-19 23:29:29.141411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.031 [2024-11-19 23:29:29.141466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.031 [2024-11-19 23:29:29.141484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:43.031 [2024-11-19 23:29:29.141495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:43.031 [2024-11-19 
23:29:29.141503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.031 [2024-11-19 23:29:29.141529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.031 [2024-11-19 23:29:29.141538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:43.031 [2024-11-19 23:29:29.141551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:43.031 [2024-11-19 23:29:29.141563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.031 [2024-11-19 23:29:29.141601] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:43.031 [2024-11-19 23:29:29.141611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.031 [2024-11-19 23:29:29.141622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:43.031 [2024-11-19 23:29:29.141630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:43.031 [2024-11-19 23:29:29.141640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.031 [2024-11-19 23:29:29.147283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.031 [2024-11-19 23:29:29.147338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:43.031 [2024-11-19 23:29:29.147350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.619 ms 00:16:43.031 [2024-11-19 23:29:29.147364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.031 [2024-11-19 23:29:29.147453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.031 [2024-11-19 23:29:29.147472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:43.031 [2024-11-19 23:29:29.147481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:43.031 [2024-11-19 23:29:29.147492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.031 [2024-11-19 23:29:29.149024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:43.031 [2024-11-19 23:29:29.150373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.673 ms, result 0 00:16:43.031 [2024-11-19 23:29:29.152590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.031 Some configs were skipped because the RPC state that can call them passed over. 
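With startup complete, ftl/trim.sh@99-@100 below exercises bdev_ftl_unmap against the first and the last 1024-block stripe of the device. The bounds follow from the layout dump above: 23592960 L2P entries at an L2P address size of 4 bytes is 94,371,840 bytes, i.e. the 90.00 MiB l2p region (equivalently, the superblock dump's Region type:0x2 blk_sz:0x5a00 is 23040 blocks of 4 KiB, assuming FTL's 4 KiB block size), so the tail trim starts at LBA 23592960 - 1024 = 23591936. A sketch of the two calls as they could be issued by hand against the running target (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

rpc.py bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024   # head of the address space
rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # tail: 23592960 - 1024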
00:16:43.031 23:29:29 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:43.298 [2024-11-19 23:29:29.389256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.298 [2024-11-19 23:29:29.389454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:43.298 [2024-11-19 23:29:29.389527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:16:43.298 [2024-11-19 23:29:29.389552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.298 [2024-11-19 23:29:29.389611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.671 ms, result 0 00:16:43.298 true 00:16:43.298 23:29:29 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:43.559 [2024-11-19 23:29:29.606270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.560 [2024-11-19 23:29:29.606450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:43.560 [2024-11-19 23:29:29.606507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.066 ms 00:16:43.560 [2024-11-19 23:29:29.606534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.560 [2024-11-19 23:29:29.606589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.385 ms, result 0 00:16:43.560 true 00:16:43.560 23:29:29 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85569 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85569 ']' 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85569 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85569 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85569' 00:16:43.560 killing process with pid 85569 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 85569 00:16:43.560 23:29:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 85569 00:16:43.823 [2024-11-19 23:29:29.787172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.787238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:43.823 [2024-11-19 23:29:29.787256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.823 [2024-11-19 23:29:29.787265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.787293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:43.823 [2024-11-19 23:29:29.788044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.788080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:43.823 [2024-11-19 23:29:29.788094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.730 ms 00:16:43.823 [2024-11-19 23:29:29.788105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.788415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.788436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:43.823 [2024-11-19 23:29:29.788446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:16:43.823 [2024-11-19 23:29:29.788458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.793058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.793113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:43.823 [2024-11-19 23:29:29.793124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:16:43.823 [2024-11-19 23:29:29.793133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.800140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.800187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:43.823 [2024-11-19 23:29:29.800199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.959 ms 00:16:43.823 [2024-11-19 23:29:29.800211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.803084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.803139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:43.823 [2024-11-19 23:29:29.803149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:16:43.823 [2024-11-19 23:29:29.803159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.807558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.807612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:43.823 [2024-11-19 23:29:29.807623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.351 ms 00:16:43.823 [2024-11-19 23:29:29.807638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.807793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.807807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:43.823 [2024-11-19 23:29:29.807816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:16:43.823 [2024-11-19 23:29:29.807826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.811337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.811387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:43.823 [2024-11-19 23:29:29.811397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.490 ms 00:16:43.823 [2024-11-19 23:29:29.811413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.814095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.814273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:43.823 [2024-11-19 
23:29:29.814290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:16:43.823 [2024-11-19 23:29:29.814300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.816633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.816686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:43.823 [2024-11-19 23:29:29.816696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:16:43.823 [2024-11-19 23:29:29.816706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.818779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.823 [2024-11-19 23:29:29.818827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:43.823 [2024-11-19 23:29:29.818835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:16:43.823 [2024-11-19 23:29:29.818845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.823 [2024-11-19 23:29:29.818886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:43.823 [2024-11-19 23:29:29.818906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.818990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819054] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:43.823 [2024-11-19 23:29:29.819109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 
23:29:29.819283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:16:43.824 [2024-11-19 23:29:29.819502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:43.824 [2024-11-19 23:29:29.819835] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:43.824 [2024-11-19 23:29:29.819843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0f3b695f-3979-428e-be6f-ba7a91aa0ca7 00:16:43.824 [2024-11-19 23:29:29.819859] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:43.824 [2024-11-19 23:29:29.819869] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:43.824 [2024-11-19 23:29:29.819879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:43.824 [2024-11-19 23:29:29.819887] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:43.824 [2024-11-19 23:29:29.819896] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:43.824 [2024-11-19 23:29:29.819908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:43.824 [2024-11-19 23:29:29.819917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:43.824 [2024-11-19 23:29:29.819924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:43.824 [2024-11-19 23:29:29.819933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:43.824 [2024-11-19 23:29:29.819941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.824 [2024-11-19 23:29:29.819951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:43.824 [2024-11-19 23:29:29.819959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:16:43.824 [2024-11-19 23:29:29.819971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.822476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.825 [2024-11-19 23:29:29.822618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:43.825 [2024-11-19 23:29:29.822674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.484 ms 00:16:43.825 [2024-11-19 23:29:29.822699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.822873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:43.825 [2024-11-19 23:29:29.822979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:43.825 [2024-11-19 23:29:29.823015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:43.825 [2024-11-19 23:29:29.823038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.830908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.831065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.825 [2024-11-19 23:29:29.831122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.831148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.831234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.831260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.825 [2024-11-19 23:29:29.831281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.831305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.831370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.831462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.825 [2024-11-19 23:29:29.831488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.831510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.831545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.831569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.825 [2024-11-19 23:29:29.831590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.831611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.845977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.846176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.825 [2024-11-19 23:29:29.846233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.846259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.857152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.857337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.825 [2024-11-19 23:29:29.857395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.857424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.857509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.857540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.825 [2024-11-19 23:29:29.857560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.857582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:43.825 [2024-11-19 23:29:29.857627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.857650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.825 [2024-11-19 23:29:29.857744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.857774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.857882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.857911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.825 [2024-11-19 23:29:29.857939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.857961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.858010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.858036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:43.825 [2024-11-19 23:29:29.858057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.858127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.858197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.858223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.825 [2024-11-19 23:29:29.858247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.858268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.858336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:43.825 [2024-11-19 23:29:29.858363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.825 [2024-11-19 23:29:29.858385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:43.825 [2024-11-19 23:29:29.858406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.825 [2024-11-19 23:29:29.858573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.375 ms, result 0 00:16:44.086 23:29:30 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.086 [2024-11-19 23:29:30.166187] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
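The shutdown dump above reports total writes: 960 against user writes: 0, so the write-amplification factor line reads "WAF: inf": with no host writes yet, the ratio of device writes to user writes is undefined. A minimal standalone C sketch of that calculation follows — this is not SPDK source, and the function and parameter names are illustrative assumptions only.

#include <stdio.h>

/* Print a WAF line in the same shape as the ftl_dev_dump_stats
 * entries above. With zero user writes the ratio is undefined,
 * hence the log prints "inf". */
static void print_waf(unsigned long total_writes, unsigned long user_writes)
{
    if (user_writes == 0) {
        printf("WAF: inf\n");
        return;
    }
    printf("WAF: %.2f\n", (double)total_writes / (double)user_writes);
}

int main(void)
{
    print_waf(960, 0); /* values taken from the ftl0 stats dump above */
    return 0;
}

Run against the dumped values this prints "WAF: inf", matching the statistics entry in the log.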
00:16:44.086 [2024-11-19 23:29:30.166513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85611 ] 00:16:44.347 [2024-11-19 23:29:30.329599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.347 [2024-11-19 23:29:30.358105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.347 [2024-11-19 23:29:30.472860] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.347 [2024-11-19 23:29:30.473199] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:44.611 [2024-11-19 23:29:30.634223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.611 [2024-11-19 23:29:30.634435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:44.611 [2024-11-19 23:29:30.634461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:44.611 [2024-11-19 23:29:30.634470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.611 [2024-11-19 23:29:30.637123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.611 [2024-11-19 23:29:30.637174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:44.611 [2024-11-19 23:29:30.637188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.614 ms 00:16:44.611 [2024-11-19 23:29:30.637195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.611 [2024-11-19 23:29:30.637307] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:44.611 [2024-11-19 23:29:30.637685] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:44.611 [2024-11-19 23:29:30.637882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.611 [2024-11-19 23:29:30.637899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:44.611 [2024-11-19 23:29:30.637909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:16:44.611 [2024-11-19 23:29:30.637917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.611 [2024-11-19 23:29:30.639657] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:44.611 [2024-11-19 23:29:30.643746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.611 [2024-11-19 23:29:30.643798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:44.611 [2024-11-19 23:29:30.643813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:16:44.611 [2024-11-19 23:29:30.643822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.611 [2024-11-19 23:29:30.643902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.611 [2024-11-19 23:29:30.643912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:44.611 [2024-11-19 23:29:30.643928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:16:44.611 [2024-11-19 23:29:30.643936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.611 [2024-11-19 23:29:30.652759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:44.611 [2024-11-19 23:29:30.652804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:44.611 [2024-11-19 23:29:30.652817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.776 ms 00:16:44.611 [2024-11-19 23:29:30.652824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.611 [2024-11-19 23:29:30.652976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.611 [2024-11-19 23:29:30.652989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:44.611 [2024-11-19 23:29:30.652998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:16:44.611 [2024-11-19 23:29:30.653009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.611 [2024-11-19 23:29:30.653040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.611 [2024-11-19 23:29:30.653049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:44.611 [2024-11-19 23:29:30.653058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:44.612 [2024-11-19 23:29:30.653066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.612 [2024-11-19 23:29:30.653087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:44.612 [2024-11-19 23:29:30.655205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.612 [2024-11-19 23:29:30.655243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:44.612 [2024-11-19 23:29:30.655253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:16:44.612 [2024-11-19 23:29:30.655267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.612 [2024-11-19 23:29:30.655315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.612 [2024-11-19 23:29:30.655326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:44.612 [2024-11-19 23:29:30.655335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:44.612 [2024-11-19 23:29:30.655342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.612 [2024-11-19 23:29:30.655362] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:44.612 [2024-11-19 23:29:30.655382] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:44.612 [2024-11-19 23:29:30.655424] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:44.612 [2024-11-19 23:29:30.655444] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:44.612 [2024-11-19 23:29:30.655549] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:44.612 [2024-11-19 23:29:30.655560] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:44.612 [2024-11-19 23:29:30.655571] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:44.612 [2024-11-19 23:29:30.655581] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:44.612 [2024-11-19 23:29:30.655590] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:44.612 [2024-11-19 23:29:30.655599] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:44.612 [2024-11-19 23:29:30.655607] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:44.612 [2024-11-19 23:29:30.655618] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:44.612 [2024-11-19 23:29:30.655628] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:44.612 [2024-11-19 23:29:30.655642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.612 [2024-11-19 23:29:30.655649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:44.612 [2024-11-19 23:29:30.655659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:16:44.612 [2024-11-19 23:29:30.655666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.612 [2024-11-19 23:29:30.655774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.612 [2024-11-19 23:29:30.655784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:44.612 [2024-11-19 23:29:30.655796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:44.612 [2024-11-19 23:29:30.655804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.612 [2024-11-19 23:29:30.655904] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:44.612 [2024-11-19 23:29:30.655914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:44.612 [2024-11-19 23:29:30.655925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.612 [2024-11-19 23:29:30.655943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.612 [2024-11-19 23:29:30.655952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:44.612 [2024-11-19 23:29:30.655960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:44.612 [2024-11-19 23:29:30.655967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:44.612 [2024-11-19 23:29:30.655976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:44.612 [2024-11-19 23:29:30.655984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.612 [2024-11-19 23:29:30.656011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:44.612 [2024-11-19 23:29:30.656018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:44.612 [2024-11-19 23:29:30.656025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:44.612 [2024-11-19 23:29:30.656033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:44.612 [2024-11-19 23:29:30.656042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:44.612 [2024-11-19 23:29:30.656050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:44.612 [2024-11-19 23:29:30.656068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:44.612 [2024-11-19 23:29:30.656076] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:44.612 [2024-11-19 23:29:30.656093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.612 [2024-11-19 23:29:30.656110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:44.612 [2024-11-19 23:29:30.656122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.612 [2024-11-19 23:29:30.656138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:44.612 [2024-11-19 23:29:30.656146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.612 [2024-11-19 23:29:30.656162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:44.612 [2024-11-19 23:29:30.656170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:44.612 [2024-11-19 23:29:30.656186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:44.612 [2024-11-19 23:29:30.656193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.612 [2024-11-19 23:29:30.656209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:44.612 [2024-11-19 23:29:30.656216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:44.612 [2024-11-19 23:29:30.656226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:44.612 [2024-11-19 23:29:30.656234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:44.612 [2024-11-19 23:29:30.656242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:44.612 [2024-11-19 23:29:30.656252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:44.612 [2024-11-19 23:29:30.656268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:44.612 [2024-11-19 23:29:30.656276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656283] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:44.612 [2024-11-19 23:29:30.656292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:44.612 [2024-11-19 23:29:30.656302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:44.612 [2024-11-19 23:29:30.656310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:44.612 [2024-11-19 23:29:30.656320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:44.612 [2024-11-19 23:29:30.656328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:44.612 [2024-11-19 23:29:30.656336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:44.612 
[2024-11-19 23:29:30.656344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:44.612 [2024-11-19 23:29:30.656352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:44.613 [2024-11-19 23:29:30.656359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:44.613 [2024-11-19 23:29:30.656369] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:44.613 [2024-11-19 23:29:30.656384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.613 [2024-11-19 23:29:30.656400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:44.613 [2024-11-19 23:29:30.656408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:44.613 [2024-11-19 23:29:30.656417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:44.613 [2024-11-19 23:29:30.656425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:44.613 [2024-11-19 23:29:30.656433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:44.613 [2024-11-19 23:29:30.656450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:44.613 [2024-11-19 23:29:30.656457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:44.613 [2024-11-19 23:29:30.656464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:44.613 [2024-11-19 23:29:30.656471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:44.613 [2024-11-19 23:29:30.656478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:44.613 [2024-11-19 23:29:30.656485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:44.613 [2024-11-19 23:29:30.656492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:44.613 [2024-11-19 23:29:30.656499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:44.613 [2024-11-19 23:29:30.656507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:44.613 [2024-11-19 23:29:30.656515] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:44.613 [2024-11-19 23:29:30.656523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:44.613 [2024-11-19 23:29:30.656536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:44.613 [2024-11-19 23:29:30.656543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:44.613 [2024-11-19 23:29:30.656551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:44.613 [2024-11-19 23:29:30.656558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:44.613 [2024-11-19 23:29:30.656565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.656573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:44.613 [2024-11-19 23:29:30.656581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:16:44.613 [2024-11-19 23:29:30.656589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.671284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.671331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:44.613 [2024-11-19 23:29:30.671343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.643 ms 00:16:44.613 [2024-11-19 23:29:30.671351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.671483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.671501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:44.613 [2024-11-19 23:29:30.671509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:44.613 [2024-11-19 23:29:30.671516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.696502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.696600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:44.613 [2024-11-19 23:29:30.696613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.949 ms 00:16:44.613 [2024-11-19 23:29:30.696622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.696721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.696763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:44.613 [2024-11-19 23:29:30.696772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:44.613 [2024-11-19 23:29:30.696780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.697320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.697343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:44.613 [2024-11-19 23:29:30.697355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:16:44.613 [2024-11-19 23:29:30.697364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.697521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.697531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:44.613 [2024-11-19 23:29:30.697543] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:44.613 [2024-11-19 23:29:30.697554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.706162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.706215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:44.613 [2024-11-19 23:29:30.706232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.584 ms 00:16:44.613 [2024-11-19 23:29:30.706239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.710466] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:44.613 [2024-11-19 23:29:30.710516] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:44.613 [2024-11-19 23:29:30.710527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.710535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:44.613 [2024-11-19 23:29:30.710544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.190 ms 00:16:44.613 [2024-11-19 23:29:30.710551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.726465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.726511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:44.613 [2024-11-19 23:29:30.726524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.830 ms 00:16:44.613 [2024-11-19 23:29:30.726532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.729604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.729656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:44.613 [2024-11-19 23:29:30.729667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.948 ms 00:16:44.613 [2024-11-19 23:29:30.729674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.732486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.732539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:44.613 [2024-11-19 23:29:30.732549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.748 ms 00:16:44.613 [2024-11-19 23:29:30.732556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.732936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.732949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:44.613 [2024-11-19 23:29:30.732958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:44.613 [2024-11-19 23:29:30.732966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.758173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.758239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:44.613 [2024-11-19 23:29:30.758251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.183 ms 00:16:44.613 [2024-11-19 23:29:30.758261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.766681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:44.613 [2024-11-19 23:29:30.784618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.784671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:44.613 [2024-11-19 23:29:30.784683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.260 ms 00:16:44.613 [2024-11-19 23:29:30.784691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.784794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.784806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:44.613 [2024-11-19 23:29:30.784816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:44.613 [2024-11-19 23:29:30.784827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.613 [2024-11-19 23:29:30.784881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.613 [2024-11-19 23:29:30.784891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:44.613 [2024-11-19 23:29:30.784900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:44.614 [2024-11-19 23:29:30.784907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.614 [2024-11-19 23:29:30.784930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.614 [2024-11-19 23:29:30.784939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:44.614 [2024-11-19 23:29:30.784947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:44.614 [2024-11-19 23:29:30.784955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.614 [2024-11-19 23:29:30.784992] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:44.614 [2024-11-19 23:29:30.785002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.614 [2024-11-19 23:29:30.785011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:44.614 [2024-11-19 23:29:30.785019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:44.614 [2024-11-19 23:29:30.785032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.614 [2024-11-19 23:29:30.790678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.614 [2024-11-19 23:29:30.790748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:44.614 [2024-11-19 23:29:30.790760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.626 ms 00:16:44.614 [2024-11-19 23:29:30.790768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.614 [2024-11-19 23:29:30.790864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:44.614 [2024-11-19 23:29:30.790875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:44.614 [2024-11-19 23:29:30.790889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:44.614 [2024-11-19 23:29:30.790897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:44.614 
[2024-11-19 23:29:30.791840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.614 [2024-11-19 23:29:30.793128] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 157.297 ms, result 0 00:16:44.614 [2024-11-19 23:29:30.794417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:44.877 [2024-11-19 23:29:30.801683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:45.821  [2024-11-19T23:29:33.052Z] Copying: 17/256 [MB] (17 MBps) [2024-11-19T23:29:33.998Z] Copying: 32/256 [MB] (14 MBps) [2024-11-19T23:29:34.942Z] Copying: 53/256 [MB] (21 MBps) [2024-11-19T23:29:35.900Z] Copying: 71/256 [MB] (18 MBps) [2024-11-19T23:29:37.287Z] Copying: 86/256 [MB] (14 MBps) [2024-11-19T23:29:37.859Z] Copying: 96/256 [MB] (10 MBps) [2024-11-19T23:29:39.250Z] Copying: 106/256 [MB] (10 MBps) [2024-11-19T23:29:40.193Z] Copying: 117/256 [MB] (10 MBps) [2024-11-19T23:29:41.137Z] Copying: 130/256 [MB] (13 MBps) [2024-11-19T23:29:42.086Z] Copying: 143/256 [MB] (13 MBps) [2024-11-19T23:29:43.030Z] Copying: 154/256 [MB] (10 MBps) [2024-11-19T23:29:43.974Z] Copying: 171/256 [MB] (17 MBps) [2024-11-19T23:29:44.917Z] Copying: 187/256 [MB] (16 MBps) [2024-11-19T23:29:45.859Z] Copying: 209/256 [MB] (21 MBps) [2024-11-19T23:29:47.244Z] Copying: 220/256 [MB] (10 MBps) [2024-11-19T23:29:48.186Z] Copying: 231/256 [MB] (10 MBps) [2024-11-19T23:29:48.448Z] Copying: 248/256 [MB] (17 MBps) [2024-11-19T23:29:49.021Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-19 23:29:48.803983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.829 [2024-11-19 23:29:48.806054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.806113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:02.829 [2024-11-19 23:29:48.806128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:02.829 [2024-11-19 23:29:48.806137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.806162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:02.829 [2024-11-19 23:29:48.806882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.806916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:02.829 [2024-11-19 23:29:48.806930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:17:02.829 [2024-11-19 23:29:48.806940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.807237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.807257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:02.829 [2024-11-19 23:29:48.807267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:02.829 [2024-11-19 23:29:48.807279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.812212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.812247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
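The spdk_dd progress entries above advance from 17/256 MB at 23:29:33 to 256/256 MB at about 23:29:49, and the final entry summarizes the run as an average of 14 MBps. A small C sketch of that average follows; the ~18 s window is an approximation read off the first and last bracketed progress stamps, not a value reported by the tool.

#include <stdio.h>

int main(void)
{
    /* 256 MB copied over roughly an 18-second wall-clock window,
     * per the progress stamps in the log above. */
    double mb_copied = 256.0;
    double elapsed_s = 18.0; /* approximate, from first/last stamps */
    printf("average: %.0f MBps\n", mb_copied / elapsed_s);
    return 0;
}

This prints "average: 14 MBps", in line with the summary entry before the FTL shutdown begins.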
00:17:02.829 [2024-11-19 23:29:48.812259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.914 ms 00:17:02.829 [2024-11-19 23:29:48.812275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.819705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.819770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:02.829 [2024-11-19 23:29:48.819782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.402 ms 00:17:02.829 [2024-11-19 23:29:48.819802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.822067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.822115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:02.829 [2024-11-19 23:29:48.822127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.188 ms 00:17:02.829 [2024-11-19 23:29:48.822135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.828195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.828253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:02.829 [2024-11-19 23:29:48.828264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.009 ms 00:17:02.829 [2024-11-19 23:29:48.828282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.829115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.829167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:02.829 [2024-11-19 23:29:48.829180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:02.829 [2024-11-19 23:29:48.829197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.832967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.833020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:02.829 [2024-11-19 23:29:48.833031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.748 ms 00:17:02.829 [2024-11-19 23:29:48.833039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.836117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.836166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:02.829 [2024-11-19 23:29:48.836182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms 00:17:02.829 [2024-11-19 23:29:48.836189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.838504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.838552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:02.829 [2024-11-19 23:29:48.838563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.265 ms 00:17:02.829 [2024-11-19 23:29:48.838571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.841230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.829 [2024-11-19 23:29:48.841306] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:02.829 [2024-11-19 23:29:48.841326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.576 ms 00:17:02.829 [2024-11-19 23:29:48.841339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.829 [2024-11-19 23:29:48.841409] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:02.829 [2024-11-19 23:29:48.841436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 
[2024-11-19 23:29:48.841648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:02.829 [2024-11-19 23:29:48.841966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:02.829 [2024-11-19 23:29:48.841990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.841997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:02.830 [2024-11-19 23:29:48.842351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:02.830 [2024-11-19 23:29:48.842359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:02.830 [2024-11-19 23:29:48.842378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:02.830 [2024-11-19 23:29:48.842386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:02.830 [2024-11-19 23:29:48.842403] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:02.830 [2024-11-19 23:29:48.842412] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0f3b695f-3979-428e-be6f-ba7a91aa0ca7
00:17:02.830 [2024-11-19 23:29:48.842421] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:02.830 [2024-11-19 23:29:48.842429] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:02.830 [2024-11-19 23:29:48.842437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:02.830 [2024-11-19 23:29:48.842446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:02.830 [2024-11-19 23:29:48.842453] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:02.830 [2024-11-19 23:29:48.842462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:02.830 [2024-11-19 23:29:48.842470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:02.830 [2024-11-19 23:29:48.842477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:02.830 [2024-11-19 23:29:48.842484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:02.830 [2024-11-19 23:29:48.842491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:02.830 [2024-11-19 23:29:48.842510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:02.830 [2024-11-19 23:29:48.842520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms
00:17:02.830 [2024-11-19 23:29:48.842528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:02.830 [2024-11-19 23:29:48.844912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:02.830 [2024-11-19 23:29:48.844954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:02.830 [2024-11-19 23:29:48.844970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.361 ms
00:17:02.830 [2024-11-19 23:29:48.844979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:02.830 [2024-11-19 23:29:48.845098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:02.830 [2024-11-19 23:29:48.845107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:02.830 [2024-11-19 23:29:48.845116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms
00:17:02.830 [2024-11-19 23:29:48.845124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:02.830 [2024-11-19 23:29:48.853951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:02.830 [2024-11-19 23:29:48.854004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:02.830 [2024-11-19 23:29:48.854016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:02.830 [2024-11-19 23:29:48.854024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
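Each band line above pairs a fill count against the 261120-block band size; every band here reports 0 / 261120 with wr_cnt: 0 and state: free, and the stats block shows 960 total writes against 0 user writes, which is why the write amplification factor prints as WAF: inf (960 / 0). A quick way to tally band states when skimming a dump like this, as a minimal sketch assuming the console output was saved to a hypothetical file ftl_dump.log:

    # count bands per state in a saved copy of this dump (ftl_dump.log is hypothetical)
    grep -o 'state: [a-z]*' ftl_dump.log | sort | uniq -c
    # every band listed above lands under "free"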
00:17:02.830 [2024-11-19 23:29:48.854124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.830 [2024-11-19 23:29:48.854136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:02.830 [2024-11-19 23:29:48.854145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.830 [2024-11-19 23:29:48.854154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.830 [2024-11-19 23:29:48.854203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.830 [2024-11-19 23:29:48.854213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:02.830 [2024-11-19 23:29:48.854223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.830 [2024-11-19 23:29:48.854230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.830 [2024-11-19 23:29:48.854252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.830 [2024-11-19 23:29:48.854261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:02.830 [2024-11-19 23:29:48.854269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.830 [2024-11-19 23:29:48.854277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.830 [2024-11-19 23:29:48.867756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.830 [2024-11-19 23:29:48.867809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:02.830 [2024-11-19 23:29:48.867821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.830 [2024-11-19 23:29:48.867829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.830 [2024-11-19 23:29:48.878402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.830 [2024-11-19 23:29:48.878452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:02.830 [2024-11-19 23:29:48.878463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.830 [2024-11-19 23:29:48.878471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.831 [2024-11-19 23:29:48.878521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.831 [2024-11-19 23:29:48.878539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:02.831 [2024-11-19 23:29:48.878548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.831 [2024-11-19 23:29:48.878557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.831 [2024-11-19 23:29:48.878588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.831 [2024-11-19 23:29:48.878599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:02.831 [2024-11-19 23:29:48.878608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.831 [2024-11-19 23:29:48.878620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.831 [2024-11-19 23:29:48.878689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.831 [2024-11-19 23:29:48.878699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:02.831 [2024-11-19 23:29:48.878708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.831 [2024-11-19 
23:29:48.878715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.831 [2024-11-19 23:29:48.878764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.831 [2024-11-19 23:29:48.878774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:02.831 [2024-11-19 23:29:48.878785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.831 [2024-11-19 23:29:48.878793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.831 [2024-11-19 23:29:48.878834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.831 [2024-11-19 23:29:48.878843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:02.831 [2024-11-19 23:29:48.878851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.831 [2024-11-19 23:29:48.878859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.831 [2024-11-19 23:29:48.878909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:02.831 [2024-11-19 23:29:48.878920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:02.831 [2024-11-19 23:29:48.878931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:02.831 [2024-11-19 23:29:48.878939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.831 [2024-11-19 23:29:48.879092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.008 ms, result 0 00:17:03.091 00:17:03.091 00:17:03.091 23:29:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:03.662 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:03.662 23:29:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:03.662 23:29:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:03.662 23:29:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:03.662 23:29:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:03.662 23:29:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:03.662 23:29:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:03.662 23:29:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85569 00:17:03.662 23:29:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 85569 ']' 00:17:03.662 23:29:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 85569 00:17:03.662 Process with pid 85569 is not found 00:17:03.662 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85569) - No such process 00:17:03.662 23:29:49 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 85569 is not found' 00:17:03.662 00:17:03.662 real 1m6.526s 00:17:03.662 user 1m30.900s 00:17:03.662 sys 0m5.286s 00:17:03.662 23:29:49 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.662 ************************************ 00:17:03.662 END TEST ftl_trim 00:17:03.662 ************************************ 00:17:03.662 23:29:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:03.662 23:29:49 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:03.662 23:29:49 ftl -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:03.662 23:29:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.662 23:29:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:03.662 ************************************ 00:17:03.662 START TEST ftl_restore 00:17:03.662 ************************************ 00:17:03.662 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:03.923 * Looking for test storage... 00:17:03.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.923 23:29:49 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.923 --rc genhtml_branch_coverage=1 00:17:03.923 --rc genhtml_function_coverage=1 00:17:03.923 --rc genhtml_legend=1 00:17:03.923 --rc geninfo_all_blocks=1 00:17:03.923 --rc geninfo_unexecuted_blocks=1 00:17:03.923 00:17:03.923 ' 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.923 --rc genhtml_branch_coverage=1 00:17:03.923 --rc genhtml_function_coverage=1 00:17:03.923 --rc genhtml_legend=1 00:17:03.923 --rc geninfo_all_blocks=1 00:17:03.923 --rc geninfo_unexecuted_blocks=1 00:17:03.923 00:17:03.923 ' 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.923 --rc genhtml_branch_coverage=1 00:17:03.923 --rc genhtml_function_coverage=1 00:17:03.923 --rc genhtml_legend=1 00:17:03.923 --rc geninfo_all_blocks=1 00:17:03.923 --rc geninfo_unexecuted_blocks=1 00:17:03.923 00:17:03.923 ' 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.923 --rc genhtml_branch_coverage=1 00:17:03.923 --rc genhtml_function_coverage=1 00:17:03.923 --rc genhtml_legend=1 00:17:03.923 --rc geninfo_all_blocks=1 00:17:03.923 --rc geninfo_unexecuted_blocks=1 00:17:03.923 00:17:03.923 ' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
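The xtrace above shows autotest_common.sh probing the installed lcov: it extracts the version with awk, then scripts/common.sh splits both version strings on dots (IFS=.-: plus read -ra) and compares them field by field to decide whether the install predates 2.x before exporting the LCOV_OPTS coverage flags. A condensed sketch of that comparison idiom (cmp_ver_lt is a made-up name for this sketch, not the helper in scripts/common.sh):

    cmp_ver_lt() {                       # 0 (true) if version $1 sorts before $2
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                           # equal
    }
    cmp_ver_lt 1.15 2 && echo 'pre-2.x lcov: keep the branch/function coverage flags'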
00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.fOX097io1O 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:03.923 
23:29:49 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=85887 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 85887 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 85887 ']' 00:17:03.923 23:29:49 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.923 23:29:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:03.923 [2024-11-19 23:29:50.071002] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:17:03.923 [2024-11-19 23:29:50.071427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85887 ] 00:17:04.182 [2024-11-19 23:29:50.238432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.182 [2024-11-19 23:29:50.267954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.753 23:29:50 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.753 23:29:50 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:17:04.753 23:29:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:04.753 23:29:50 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:04.753 23:29:50 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:04.753 23:29:50 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:04.753 23:29:50 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:04.753 23:29:50 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:05.325 23:29:51 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:05.325 23:29:51 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:05.325 23:29:51 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:05.325 { 00:17:05.325 "name": "nvme0n1", 00:17:05.325 "aliases": [ 00:17:05.325 "fe0790cc-a1a8-48af-b02d-dd10cf0527ad" 00:17:05.325 ], 00:17:05.325 "product_name": "NVMe disk", 00:17:05.325 "block_size": 4096, 00:17:05.325 "num_blocks": 1310720, 00:17:05.325 "uuid": 
"fe0790cc-a1a8-48af-b02d-dd10cf0527ad", 00:17:05.325 "numa_id": -1, 00:17:05.325 "assigned_rate_limits": { 00:17:05.325 "rw_ios_per_sec": 0, 00:17:05.325 "rw_mbytes_per_sec": 0, 00:17:05.325 "r_mbytes_per_sec": 0, 00:17:05.325 "w_mbytes_per_sec": 0 00:17:05.325 }, 00:17:05.325 "claimed": true, 00:17:05.325 "claim_type": "read_many_write_one", 00:17:05.325 "zoned": false, 00:17:05.325 "supported_io_types": { 00:17:05.325 "read": true, 00:17:05.325 "write": true, 00:17:05.325 "unmap": true, 00:17:05.325 "flush": true, 00:17:05.325 "reset": true, 00:17:05.325 "nvme_admin": true, 00:17:05.325 "nvme_io": true, 00:17:05.325 "nvme_io_md": false, 00:17:05.325 "write_zeroes": true, 00:17:05.325 "zcopy": false, 00:17:05.325 "get_zone_info": false, 00:17:05.325 "zone_management": false, 00:17:05.325 "zone_append": false, 00:17:05.325 "compare": true, 00:17:05.325 "compare_and_write": false, 00:17:05.325 "abort": true, 00:17:05.325 "seek_hole": false, 00:17:05.325 "seek_data": false, 00:17:05.325 "copy": true, 00:17:05.325 "nvme_iov_md": false 00:17:05.325 }, 00:17:05.325 "driver_specific": { 00:17:05.325 "nvme": [ 00:17:05.325 { 00:17:05.325 "pci_address": "0000:00:11.0", 00:17:05.325 "trid": { 00:17:05.325 "trtype": "PCIe", 00:17:05.325 "traddr": "0000:00:11.0" 00:17:05.325 }, 00:17:05.325 "ctrlr_data": { 00:17:05.325 "cntlid": 0, 00:17:05.325 "vendor_id": "0x1b36", 00:17:05.325 "model_number": "QEMU NVMe Ctrl", 00:17:05.325 "serial_number": "12341", 00:17:05.325 "firmware_revision": "8.0.0", 00:17:05.325 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:05.325 "oacs": { 00:17:05.325 "security": 0, 00:17:05.325 "format": 1, 00:17:05.325 "firmware": 0, 00:17:05.325 "ns_manage": 1 00:17:05.325 }, 00:17:05.325 "multi_ctrlr": false, 00:17:05.325 "ana_reporting": false 00:17:05.325 }, 00:17:05.325 "vs": { 00:17:05.325 "nvme_version": "1.4" 00:17:05.325 }, 00:17:05.325 "ns_data": { 00:17:05.325 "id": 1, 00:17:05.325 "can_share": false 00:17:05.325 } 00:17:05.325 } 00:17:05.325 ], 00:17:05.325 "mp_policy": "active_passive" 00:17:05.325 } 00:17:05.325 } 00:17:05.325 ]' 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:05.325 23:29:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=c6b3f464-96d4-49c7-97c8-3e9d3bd16784 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:05.585 23:29:51 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6b3f464-96d4-49c7-97c8-3e9d3bd16784 00:17:05.845 23:29:51 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:06.105 23:29:52 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=23b36f27-eb54-4f10-b337-a75cbb16fb17 00:17:06.106 23:29:52 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 23b36f27-eb54-4f10-b337-a75cbb16fb17 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:06.366 23:29:52 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.366 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.366 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:06.366 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:06.366 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:06.366 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.625 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:06.625 { 00:17:06.625 "name": "2e5c2493-8461-465e-9fd3-10d2eef904f5", 00:17:06.625 "aliases": [ 00:17:06.625 "lvs/nvme0n1p0" 00:17:06.625 ], 00:17:06.625 "product_name": "Logical Volume", 00:17:06.625 "block_size": 4096, 00:17:06.625 "num_blocks": 26476544, 00:17:06.626 "uuid": "2e5c2493-8461-465e-9fd3-10d2eef904f5", 00:17:06.626 "assigned_rate_limits": { 00:17:06.626 "rw_ios_per_sec": 0, 00:17:06.626 "rw_mbytes_per_sec": 0, 00:17:06.626 "r_mbytes_per_sec": 0, 00:17:06.626 "w_mbytes_per_sec": 0 00:17:06.626 }, 00:17:06.626 "claimed": false, 00:17:06.626 "zoned": false, 00:17:06.626 "supported_io_types": { 00:17:06.626 "read": true, 00:17:06.626 "write": true, 00:17:06.626 "unmap": true, 00:17:06.626 "flush": false, 00:17:06.626 "reset": true, 00:17:06.626 "nvme_admin": false, 00:17:06.626 "nvme_io": false, 00:17:06.626 "nvme_io_md": false, 00:17:06.626 "write_zeroes": true, 00:17:06.626 "zcopy": false, 00:17:06.626 "get_zone_info": false, 00:17:06.626 "zone_management": false, 00:17:06.626 "zone_append": false, 00:17:06.626 "compare": false, 00:17:06.626 "compare_and_write": false, 00:17:06.626 "abort": false, 00:17:06.626 "seek_hole": true, 00:17:06.626 "seek_data": true, 00:17:06.626 "copy": false, 00:17:06.626 "nvme_iov_md": false 00:17:06.626 }, 00:17:06.626 "driver_specific": { 00:17:06.626 "lvol": { 00:17:06.626 "lvol_store_uuid": "23b36f27-eb54-4f10-b337-a75cbb16fb17", 00:17:06.626 "base_bdev": "nvme0n1", 00:17:06.626 "thin_provision": true, 00:17:06.626 "num_allocated_clusters": 0, 00:17:06.626 "snapshot": false, 00:17:06.626 "clone": false, 00:17:06.626 "esnap_clone": false 00:17:06.626 } 00:17:06.626 } 00:17:06.626 } 00:17:06.626 ]' 00:17:06.626 23:29:52 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:06.626 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:06.626 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:06.626 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:06.626 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:06.626 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:06.626 23:29:52 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:06.626 23:29:52 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:06.626 23:29:52 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:06.886 23:29:52 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:06.886 23:29:52 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:06.886 23:29:52 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.886 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:06.886 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:06.886 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:06.886 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:06.886 23:29:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:07.146 { 00:17:07.146 "name": "2e5c2493-8461-465e-9fd3-10d2eef904f5", 00:17:07.146 "aliases": [ 00:17:07.146 "lvs/nvme0n1p0" 00:17:07.146 ], 00:17:07.146 "product_name": "Logical Volume", 00:17:07.146 "block_size": 4096, 00:17:07.146 "num_blocks": 26476544, 00:17:07.146 "uuid": "2e5c2493-8461-465e-9fd3-10d2eef904f5", 00:17:07.146 "assigned_rate_limits": { 00:17:07.146 "rw_ios_per_sec": 0, 00:17:07.146 "rw_mbytes_per_sec": 0, 00:17:07.146 "r_mbytes_per_sec": 0, 00:17:07.146 "w_mbytes_per_sec": 0 00:17:07.146 }, 00:17:07.146 "claimed": false, 00:17:07.146 "zoned": false, 00:17:07.146 "supported_io_types": { 00:17:07.146 "read": true, 00:17:07.146 "write": true, 00:17:07.146 "unmap": true, 00:17:07.146 "flush": false, 00:17:07.146 "reset": true, 00:17:07.146 "nvme_admin": false, 00:17:07.146 "nvme_io": false, 00:17:07.146 "nvme_io_md": false, 00:17:07.146 "write_zeroes": true, 00:17:07.146 "zcopy": false, 00:17:07.146 "get_zone_info": false, 00:17:07.146 "zone_management": false, 00:17:07.146 "zone_append": false, 00:17:07.146 "compare": false, 00:17:07.146 "compare_and_write": false, 00:17:07.146 "abort": false, 00:17:07.146 "seek_hole": true, 00:17:07.146 "seek_data": true, 00:17:07.146 "copy": false, 00:17:07.146 "nvme_iov_md": false 00:17:07.146 }, 00:17:07.146 "driver_specific": { 00:17:07.146 "lvol": { 00:17:07.146 "lvol_store_uuid": "23b36f27-eb54-4f10-b337-a75cbb16fb17", 00:17:07.146 "base_bdev": "nvme0n1", 00:17:07.146 "thin_provision": true, 00:17:07.146 "num_allocated_clusters": 0, 00:17:07.146 "snapshot": false, 00:17:07.146 "clone": false, 00:17:07.146 "esnap_clone": false 00:17:07.146 } 00:17:07.146 } 00:17:07.146 } 00:17:07.146 ]' 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
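The get_bdev_size helper traced several times above does exactly one thing: pull block_size and num_blocks out of bdev_get_bdevs with jq and convert to MiB. Re-running the arithmetic with the values from the dumps (rpc.py path shortened for readability):

    bs=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))   # 5120 MiB for the base bdev
    # the 2e5c2493-... lvol: 4096 * 26476544 / 1024 / 1024 = 103424 MiB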
00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:07.146 23:29:53 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:07.146 23:29:53 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:07.146 23:29:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:07.146 23:29:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:17:07.146 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e5c2493-8461-465e-9fd3-10d2eef904f5 00:17:07.405 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:07.405 { 00:17:07.405 "name": "2e5c2493-8461-465e-9fd3-10d2eef904f5", 00:17:07.405 "aliases": [ 00:17:07.405 "lvs/nvme0n1p0" 00:17:07.405 ], 00:17:07.405 "product_name": "Logical Volume", 00:17:07.405 "block_size": 4096, 00:17:07.405 "num_blocks": 26476544, 00:17:07.405 "uuid": "2e5c2493-8461-465e-9fd3-10d2eef904f5", 00:17:07.405 "assigned_rate_limits": { 00:17:07.405 "rw_ios_per_sec": 0, 00:17:07.405 "rw_mbytes_per_sec": 0, 00:17:07.405 "r_mbytes_per_sec": 0, 00:17:07.405 "w_mbytes_per_sec": 0 00:17:07.405 }, 00:17:07.405 "claimed": false, 00:17:07.405 "zoned": false, 00:17:07.405 "supported_io_types": { 00:17:07.405 "read": true, 00:17:07.405 "write": true, 00:17:07.405 "unmap": true, 00:17:07.405 "flush": false, 00:17:07.405 "reset": true, 00:17:07.405 "nvme_admin": false, 00:17:07.405 "nvme_io": false, 00:17:07.405 "nvme_io_md": false, 00:17:07.405 "write_zeroes": true, 00:17:07.405 "zcopy": false, 00:17:07.405 "get_zone_info": false, 00:17:07.405 "zone_management": false, 00:17:07.405 "zone_append": false, 00:17:07.405 "compare": false, 00:17:07.405 "compare_and_write": false, 00:17:07.405 "abort": false, 00:17:07.405 "seek_hole": true, 00:17:07.405 "seek_data": true, 00:17:07.405 "copy": false, 00:17:07.405 "nvme_iov_md": false 00:17:07.405 }, 00:17:07.405 "driver_specific": { 00:17:07.405 "lvol": { 00:17:07.405 "lvol_store_uuid": "23b36f27-eb54-4f10-b337-a75cbb16fb17", 00:17:07.405 "base_bdev": "nvme0n1", 00:17:07.405 "thin_provision": true, 00:17:07.405 "num_allocated_clusters": 0, 00:17:07.405 "snapshot": false, 00:17:07.405 "clone": false, 00:17:07.405 "esnap_clone": false 00:17:07.405 } 00:17:07.405 } 00:17:07.405 } 00:17:07.405 ]' 00:17:07.405 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:07.405 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:17:07.405 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:07.405 23:29:53 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:17:07.405 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:07.405 23:29:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:17:07.405 23:29:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:07.405 23:29:53 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2e5c2493-8461-465e-9fd3-10d2eef904f5 --l2p_dram_limit 10' 00:17:07.405 23:29:53 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:07.405 23:29:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:07.405 23:29:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:07.405 23:29:53 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:07.405 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:07.405 23:29:53 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2e5c2493-8461-465e-9fd3-10d2eef904f5 --l2p_dram_limit 10 -c nvc0n1p0 00:17:07.668 [2024-11-19 23:29:53.765729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.765775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:07.668 [2024-11-19 23:29:53.765785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:07.668 [2024-11-19 23:29:53.765793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.765835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.765844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:07.668 [2024-11-19 23:29:53.765851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:07.668 [2024-11-19 23:29:53.765860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.765881] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:07.668 [2024-11-19 23:29:53.766091] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:07.668 [2024-11-19 23:29:53.766104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.766111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:07.668 [2024-11-19 23:29:53.766118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:17:07.668 [2024-11-19 23:29:53.766125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.766152] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c164ffe3-a6bc-4c97-b488-20bc6f7701ea 00:17:07.668 [2024-11-19 23:29:53.767118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.767145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:07.668 [2024-11-19 23:29:53.767156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:07.668 [2024-11-19 23:29:53.767162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.771881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 
23:29:53.771906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:07.668 [2024-11-19 23:29:53.771915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.663 ms 00:17:07.668 [2024-11-19 23:29:53.771921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.771978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.771986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:07.668 [2024-11-19 23:29:53.772001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:07.668 [2024-11-19 23:29:53.772007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.772044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.772052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:07.668 [2024-11-19 23:29:53.772059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:07.668 [2024-11-19 23:29:53.772065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.772084] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:07.668 [2024-11-19 23:29:53.773346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.773461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:07.668 [2024-11-19 23:29:53.773476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:17:07.668 [2024-11-19 23:29:53.773483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.773511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.773519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:07.668 [2024-11-19 23:29:53.773525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:07.668 [2024-11-19 23:29:53.773537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.773557] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:07.668 [2024-11-19 23:29:53.773665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:07.668 [2024-11-19 23:29:53.773673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:07.668 [2024-11-19 23:29:53.773689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:07.668 [2024-11-19 23:29:53.773697] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:07.668 [2024-11-19 23:29:53.773708] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:07.668 [2024-11-19 23:29:53.773714] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:07.668 [2024-11-19 23:29:53.773723] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:07.668 [2024-11-19 23:29:53.773744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:07.668 [2024-11-19 23:29:53.773751] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:07.668 [2024-11-19 23:29:53.773757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.773764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:07.668 [2024-11-19 23:29:53.773770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:17:07.668 [2024-11-19 23:29:53.773777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.773842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.668 [2024-11-19 23:29:53.773851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:07.668 [2024-11-19 23:29:53.773857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:07.668 [2024-11-19 23:29:53.773864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.668 [2024-11-19 23:29:53.773936] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:07.668 [2024-11-19 23:29:53.773946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:07.668 [2024-11-19 23:29:53.773953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.668 [2024-11-19 23:29:53.773960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.668 [2024-11-19 23:29:53.773966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:07.668 [2024-11-19 23:29:53.773973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:07.668 [2024-11-19 23:29:53.773978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:07.668 [2024-11-19 23:29:53.773984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:07.668 [2024-11-19 23:29:53.773989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:07.668 [2024-11-19 23:29:53.773995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.668 [2024-11-19 23:29:53.774000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:07.668 [2024-11-19 23:29:53.774006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:07.668 [2024-11-19 23:29:53.774012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:07.668 [2024-11-19 23:29:53.774020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:07.668 [2024-11-19 23:29:53.774025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:07.668 [2024-11-19 23:29:53.774031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:07.668 [2024-11-19 23:29:53.774043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:07.668 [2024-11-19 23:29:53.774048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:07.668 [2024-11-19 23:29:53.774059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.668 [2024-11-19 23:29:53.774070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:07.668 
[2024-11-19 23:29:53.774077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.668 [2024-11-19 23:29:53.774089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:07.668 [2024-11-19 23:29:53.774094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.668 [2024-11-19 23:29:53.774105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:07.668 [2024-11-19 23:29:53.774114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:07.668 [2024-11-19 23:29:53.774124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:07.668 [2024-11-19 23:29:53.774130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.668 [2024-11-19 23:29:53.774141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:07.668 [2024-11-19 23:29:53.774147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:07.668 [2024-11-19 23:29:53.774151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:07.668 [2024-11-19 23:29:53.774158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:07.668 [2024-11-19 23:29:53.774162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:07.668 [2024-11-19 23:29:53.774169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.668 [2024-11-19 23:29:53.774173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:07.668 [2024-11-19 23:29:53.774180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:07.668 [2024-11-19 23:29:53.774184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.669 [2024-11-19 23:29:53.774190] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:07.669 [2024-11-19 23:29:53.774196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:07.669 [2024-11-19 23:29:53.774204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:07.669 [2024-11-19 23:29:53.774209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:07.669 [2024-11-19 23:29:53.774219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:07.669 [2024-11-19 23:29:53.774224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:07.669 [2024-11-19 23:29:53.774230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:07.669 [2024-11-19 23:29:53.774235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:07.669 [2024-11-19 23:29:53.774241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:07.669 [2024-11-19 23:29:53.774246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:07.669 [2024-11-19 23:29:53.774256] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:07.669 [2024-11-19 
23:29:53.774264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.669 [2024-11-19 23:29:53.774273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:07.669 [2024-11-19 23:29:53.774279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:07.669 [2024-11-19 23:29:53.774286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:07.669 [2024-11-19 23:29:53.774291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:07.669 [2024-11-19 23:29:53.774298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:07.669 [2024-11-19 23:29:53.774303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:07.669 [2024-11-19 23:29:53.774311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:07.669 [2024-11-19 23:29:53.774316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:07.669 [2024-11-19 23:29:53.774323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:07.669 [2024-11-19 23:29:53.774328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:07.669 [2024-11-19 23:29:53.774335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:07.669 [2024-11-19 23:29:53.774340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:07.669 [2024-11-19 23:29:53.774347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:07.669 [2024-11-19 23:29:53.774352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:07.669 [2024-11-19 23:29:53.774359] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:07.669 [2024-11-19 23:29:53.774365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:07.669 [2024-11-19 23:29:53.774372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:07.669 [2024-11-19 23:29:53.774378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:07.669 [2024-11-19 23:29:53.774385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:07.669 [2024-11-19 23:29:53.774390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:07.669 [2024-11-19 23:29:53.774397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:07.669 [2024-11-19 23:29:53.774402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:07.669 [2024-11-19 23:29:53.774410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:17:07.669 [2024-11-19 23:29:53.774416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:07.669 [2024-11-19 23:29:53.774444] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:07.669 [2024-11-19 23:29:53.774451] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:11.877 [2024-11-19 23:29:57.829639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.829716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:11.877 [2024-11-19 23:29:57.829755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4055.170 ms 00:17:11.877 [2024-11-19 23:29:57.829765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.843332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.843559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:11.877 [2024-11-19 23:29:57.843587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.445 ms 00:17:11.877 [2024-11-19 23:29:57.843597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.843712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.843728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:11.877 [2024-11-19 23:29:57.843882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:11.877 [2024-11-19 23:29:57.843892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.856350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.856402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:11.877 [2024-11-19 23:29:57.856416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.392 ms 00:17:11.877 [2024-11-19 23:29:57.856428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.856465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.856474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.877 [2024-11-19 23:29:57.856485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:11.877 [2024-11-19 23:29:57.856494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.857080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.857114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.877 [2024-11-19 23:29:57.857130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:11.877 [2024-11-19 23:29:57.857141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 
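Each management step above is reported by mngt/ftl_mngt.c as a fixed quadruple of trace_step NOTICE lines: Action, name, duration, status (source line 428 carries the step name, line 430 its duration). A minimal sketch for turning such a capture into a per-step timing summary, assuming the FTL output has been saved one entry per line to a file named ftl.log (the file name and the awk pairing logic are illustrative, not part of the test itself):

# Pair each "name:" entry (428:trace_step) with the following "duration:"
# entry (430:trace_step) and print a timing table, slowest step first.
awk '
  / 428:trace_step: /  { sub(/.*name: /, "");     sub(/ [0-9:.]+[[:space:]]*$/, ""); name = $0 }
  / 430:trace_step: /  { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                         printf "%10.3f ms  %s\n", $0, name }
' ftl.log | sort -rn

Run against the startup sequence above, the 4055.170 ms "Scrub NV cache" step would dominate the table, consistent with the earlier "NV cache data region needs scrubbing, this may take a while" notice.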
[2024-11-19 23:29:57.857269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.857280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.877 [2024-11-19 23:29:57.857293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:11.877 [2024-11-19 23:29:57.857302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.865577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.865621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.877 [2024-11-19 23:29:57.865633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.246 ms 00:17:11.877 [2024-11-19 23:29:57.865642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.875308] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:11.877 [2024-11-19 23:29:57.879054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.879104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:11.877 [2024-11-19 23:29:57.879116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.343 ms 00:17:11.877 [2024-11-19 23:29:57.879126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.983207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.983282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:11.877 [2024-11-19 23:29:57.983298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.048 ms 00:17:11.877 [2024-11-19 23:29:57.983313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.983525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.983541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:11.877 [2024-11-19 23:29:57.983551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:17:11.877 [2024-11-19 23:29:57.983562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.989504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.989714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:11.877 [2024-11-19 23:29:57.989753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.915 ms 00:17:11.877 [2024-11-19 23:29:57.989765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.995242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.995309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:11.877 [2024-11-19 23:29:57.995323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.430 ms 00:17:11.877 [2024-11-19 23:29:57.995333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:57.995680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:57.995697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:11.877 
[2024-11-19 23:29:57.995707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:17:11.877 [2024-11-19 23:29:57.995719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:58.042914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:58.043118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:11.877 [2024-11-19 23:29:58.043144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.131 ms 00:17:11.877 [2024-11-19 23:29:58.043156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.877 [2024-11-19 23:29:58.050244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.877 [2024-11-19 23:29:58.050444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:11.877 [2024-11-19 23:29:58.050464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.016 ms 00:17:11.878 [2024-11-19 23:29:58.050476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.878 [2024-11-19 23:29:58.057682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.878 [2024-11-19 23:29:58.057919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:11.878 [2024-11-19 23:29:58.057941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.879 ms 00:17:11.878 [2024-11-19 23:29:58.057952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.878 [2024-11-19 23:29:58.064214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.878 [2024-11-19 23:29:58.064276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:11.878 [2024-11-19 23:29:58.064287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:17:11.878 [2024-11-19 23:29:58.064301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.878 [2024-11-19 23:29:58.064355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.878 [2024-11-19 23:29:58.064368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:11.878 [2024-11-19 23:29:58.064377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:11.878 [2024-11-19 23:29:58.064396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.878 [2024-11-19 23:29:58.064486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.878 [2024-11-19 23:29:58.064500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:11.878 [2024-11-19 23:29:58.064509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:11.878 [2024-11-19 23:29:58.064522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.878 [2024-11-19 23:29:58.065682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4299.488 ms, result 0 00:17:12.142 { 00:17:12.142 "name": "ftl0", 00:17:12.142 "uuid": "c164ffe3-a6bc-4c97-b488-20bc6f7701ea" 00:17:12.142 } 00:17:12.142 23:29:58 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:12.142 23:29:58 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:12.143 23:29:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:12.143 23:29:58 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:12.408 [2024-11-19 23:29:58.515631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.515901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:12.408 [2024-11-19 23:29:58.515936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:12.408 [2024-11-19 23:29:58.515946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.515985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.408 [2024-11-19 23:29:58.516752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.516797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:12.408 [2024-11-19 23:29:58.516810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:17:12.408 [2024-11-19 23:29:58.516822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.517092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.517113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:12.408 [2024-11-19 23:29:58.517125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:12.408 [2024-11-19 23:29:58.517143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.520421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.520447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:12.408 [2024-11-19 23:29:58.520457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.261 ms 00:17:12.408 [2024-11-19 23:29:58.520466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.526676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.526724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:12.408 [2024-11-19 23:29:58.526752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.192 ms 00:17:12.408 [2024-11-19 23:29:58.526765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.529752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.529810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:12.408 [2024-11-19 23:29:58.529820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:17:12.408 [2024-11-19 23:29:58.529830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.536357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.536412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:12.408 [2024-11-19 23:29:58.536423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.480 ms 00:17:12.408 [2024-11-19 23:29:58.536434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.536568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.536582] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:12.408 [2024-11-19 23:29:58.536598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:12.408 [2024-11-19 23:29:58.536608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.539886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.540083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:12.408 [2024-11-19 23:29:58.540102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:17:12.408 [2024-11-19 23:29:58.540112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.542909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.542967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:12.408 [2024-11-19 23:29:58.542978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.691 ms 00:17:12.408 [2024-11-19 23:29:58.542987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.545210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.545265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:12.408 [2024-11-19 23:29:58.545275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.179 ms 00:17:12.408 [2024-11-19 23:29:58.545284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.547397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.408 [2024-11-19 23:29:58.547452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:12.408 [2024-11-19 23:29:58.547463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:17:12.408 [2024-11-19 23:29:58.547472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.408 [2024-11-19 23:29:58.547514] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:12.408 [2024-11-19 23:29:58.547531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:12.408 [2024-11-19 23:29:58.547541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:12.408 [2024-11-19 23:29:58.547551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:12.408 [2024-11-19 23:29:58.547559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:12.408 [2024-11-19 23:29:58.547575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:12.408 [2024-11-19 23:29:58.547583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547618] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 
[2024-11-19 23:29:58.547864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.547982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:12.409 [2024-11-19 23:29:58.548106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:12.409 [2024-11-19 23:29:58.548438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:12.410 [2024-11-19 23:29:58.548446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:12.410 [2024-11-19 23:29:58.548456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:12.410 [2024-11-19 23:29:58.548462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:12.410 [2024-11-19 23:29:58.548472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:12.410 [2024-11-19 23:29:58.548479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:12.410 [2024-11-19 23:29:58.548501] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:12.410 [2024-11-19 23:29:58.548509] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c164ffe3-a6bc-4c97-b488-20bc6f7701ea 00:17:12.410 [2024-11-19 23:29:58.548525] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:12.410 [2024-11-19 23:29:58.548533] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:12.410 [2024-11-19 23:29:58.548542] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:12.410 [2024-11-19 23:29:58.548550] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:12.410 [2024-11-19 23:29:58.548559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:12.410 [2024-11-19 23:29:58.548571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:12.410 [2024-11-19 23:29:58.548581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:12.410 [2024-11-19 23:29:58.548587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:12.410 [2024-11-19 23:29:58.548596] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:12.410 [2024-11-19 23:29:58.548603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.410 [2024-11-19 23:29:58.548613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:12.410 [2024-11-19 23:29:58.548621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:17:12.410 [2024-11-19 23:29:58.548631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.550915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.410 [2024-11-19 23:29:58.550965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:12.410 [2024-11-19 23:29:58.550975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:17:12.410 [2024-11-19 23:29:58.550988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.551095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.410 [2024-11-19 23:29:58.551106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:12.410 [2024-11-19 23:29:58.551114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:12.410 [2024-11-19 23:29:58.551123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.558982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.559037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.410 [2024-11-19 23:29:58.559056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.559066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.559133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.559144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.410 [2024-11-19 23:29:58.559152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.559163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.559243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.559259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.410 [2024-11-19 23:29:58.559267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.559280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.559302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.559313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.410 [2024-11-19 23:29:58.559321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.559331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.573018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.573073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.410 [2024-11-19 23:29:58.573090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 
[2024-11-19 23:29:58.573100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.583424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.583482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.410 [2024-11-19 23:29:58.583494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.583505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.583581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.583597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.410 [2024-11-19 23:29:58.583605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.583616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.583666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.583678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.410 [2024-11-19 23:29:58.583686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.583696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.583787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.583800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.410 [2024-11-19 23:29:58.583808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.583818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.583855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.583869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:12.410 [2024-11-19 23:29:58.583877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.583891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.583934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.583948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.410 [2024-11-19 23:29:58.583957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.583966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.584047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:12.410 [2024-11-19 23:29:58.584061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.410 [2024-11-19 23:29:58.584070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:12.410 [2024-11-19 23:29:58.584081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.410 [2024-11-19 23:29:58.584219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.554 ms, result 0 00:17:12.410 true 00:17:12.671 23:29:58 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 85887 00:17:12.671 
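With 'FTL shutdown' finished (result 0) and bdev_ftl_unload returning true, restore.sh tears the target down and prepares data for the restore pass; the killprocess call above expands into the kill/wait xtrace that follows. Reconstructed from the traced commands in this transcript, the sequence is roughly the following sketch (paths shortened to repo-relative form; the config redirect target is inferred from the --json argument handed to spdk_dd further down, and the pid is of course per-run):

# Save the bdev subsystem config so the FTL device can be re-created later,
# cleanly unload ftl0 (persisting the clean-state metadata dumped above),
# stop the SPDK app, then generate 1 GiB of random test data.
{
  echo '{"subsystems": ['
  scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > test/ftl/config/ftl.json
scripts/rpc.py bdev_ftl_unload -b ftl0
kill 85887 && wait 85887
dd if=/dev/urandom of=test/ftl/testfile bs=4K count=256K

The dd arithmetic checks out against its own summary just below: 256K blocks of 4 KiB is 262144 x 4096 B = 1073741824 B (1 GiB), and 1073741824 B / 4.37171 s is about 246 MB/s, as reported.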
23:29:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85887 ']' 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85887 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85887 00:17:12.671 killing process with pid 85887 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85887' 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 85887 00:17:12.671 23:29:58 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 85887 00:17:18.078 23:30:03 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:22.287 262144+0 records in 00:17:22.287 262144+0 records out 00:17:22.287 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.37171 s, 246 MB/s 00:17:22.287 23:30:07 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:24.209 23:30:09 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.209 [2024-11-19 23:30:09.948005] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:17:24.209 [2024-11-19 23:30:09.948239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86106 ] 00:17:24.209 [2024-11-19 23:30:10.101853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.209 [2024-11-19 23:30:10.122301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.209 [2024-11-19 23:30:10.219128] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:24.209 [2024-11-19 23:30:10.219206] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:24.209 [2024-11-19 23:30:10.380139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.209 [2024-11-19 23:30:10.380359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:24.209 [2024-11-19 23:30:10.380384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:24.209 [2024-11-19 23:30:10.380393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.209 [2024-11-19 23:30:10.380466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.209 [2024-11-19 23:30:10.380478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:24.209 [2024-11-19 23:30:10.380491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:24.209 [2024-11-19 23:30:10.380499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.209 [2024-11-19 23:30:10.380527] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:17:24.209 [2024-11-19 23:30:10.380803] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:24.209 [2024-11-19 23:30:10.380823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.209 [2024-11-19 23:30:10.380835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:24.209 [2024-11-19 23:30:10.380845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:17:24.209 [2024-11-19 23:30:10.380856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.209 [2024-11-19 23:30:10.382520] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:24.209 [2024-11-19 23:30:10.386254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.209 [2024-11-19 23:30:10.386308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:24.209 [2024-11-19 23:30:10.386327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.736 ms 00:17:24.209 [2024-11-19 23:30:10.386338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.209 [2024-11-19 23:30:10.386409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.209 [2024-11-19 23:30:10.386422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:24.209 [2024-11-19 23:30:10.386431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:24.209 [2024-11-19 23:30:10.386438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.209 [2024-11-19 23:30:10.394470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.209 [2024-11-19 23:30:10.394662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:24.209 [2024-11-19 23:30:10.394685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.990 ms 00:17:24.209 [2024-11-19 23:30:10.394694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.209 [2024-11-19 23:30:10.394816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.210 [2024-11-19 23:30:10.394827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:24.210 [2024-11-19 23:30:10.394837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:24.210 [2024-11-19 23:30:10.394851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.210 [2024-11-19 23:30:10.394919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.210 [2024-11-19 23:30:10.394929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:24.210 [2024-11-19 23:30:10.394937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:24.210 [2024-11-19 23:30:10.394944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.210 [2024-11-19 23:30:10.394969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:24.210 [2024-11-19 23:30:10.397038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.210 [2024-11-19 23:30:10.397075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:24.210 [2024-11-19 23:30:10.397085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:17:24.210 [2024-11-19 23:30:10.397093] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.210 [2024-11-19 23:30:10.397127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.210 [2024-11-19 23:30:10.397135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:24.210 [2024-11-19 23:30:10.397144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:24.210 [2024-11-19 23:30:10.397153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.210 [2024-11-19 23:30:10.397183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:24.210 [2024-11-19 23:30:10.397203] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:24.210 [2024-11-19 23:30:10.397245] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:24.210 [2024-11-19 23:30:10.397261] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:24.210 [2024-11-19 23:30:10.397367] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:24.210 [2024-11-19 23:30:10.397378] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:24.210 [2024-11-19 23:30:10.397388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:24.210 [2024-11-19 23:30:10.397403] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:24.210 [2024-11-19 23:30:10.397412] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:24.210 [2024-11-19 23:30:10.397420] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:24.210 [2024-11-19 23:30:10.397427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:24.210 [2024-11-19 23:30:10.397435] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:24.210 [2024-11-19 23:30:10.397443] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:24.210 [2024-11-19 23:30:10.397452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.210 [2024-11-19 23:30:10.397460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:24.210 [2024-11-19 23:30:10.397467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:17:24.210 [2024-11-19 23:30:10.397477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.210 [2024-11-19 23:30:10.397561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.210 [2024-11-19 23:30:10.397575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:24.210 [2024-11-19 23:30:10.397583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:24.210 [2024-11-19 23:30:10.397591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.210 [2024-11-19 23:30:10.397692] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:24.210 [2024-11-19 23:30:10.397704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:24.210 [2024-11-19 23:30:10.397714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:24.210 
[2024-11-19 23:30:10.397723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.210 [2024-11-19 23:30:10.397939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:24.210 [2024-11-19 23:30:10.397977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:24.210 [2024-11-19 23:30:10.398001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:24.210 [2024-11-19 23:30:10.398022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:24.210 [2024-11-19 23:30:10.398044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:24.210 [2024-11-19 23:30:10.398066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:24.210 [2024-11-19 23:30:10.398090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:24.210 [2024-11-19 23:30:10.398111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:24.210 [2024-11-19 23:30:10.398129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:24.210 [2024-11-19 23:30:10.398147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:24.210 [2024-11-19 23:30:10.398166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:24.210 [2024-11-19 23:30:10.398240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.210 [2024-11-19 23:30:10.398264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:24.210 [2024-11-19 23:30:10.398283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:24.210 [2024-11-19 23:30:10.398301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.210 [2024-11-19 23:30:10.398320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:24.472 [2024-11-19 23:30:10.398828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:24.472 [2024-11-19 23:30:10.398855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.472 [2024-11-19 23:30:10.398866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:24.472 [2024-11-19 23:30:10.398874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:24.472 [2024-11-19 23:30:10.398881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.472 [2024-11-19 23:30:10.398889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:24.472 [2024-11-19 23:30:10.398907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:24.472 [2024-11-19 23:30:10.398915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.472 [2024-11-19 23:30:10.398922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:24.472 [2024-11-19 23:30:10.398928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:24.473 [2024-11-19 23:30:10.398935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:24.473 [2024-11-19 23:30:10.398942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:24.473 [2024-11-19 23:30:10.398949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:24.473 [2024-11-19 23:30:10.398956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:24.473 [2024-11-19 23:30:10.398963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:17:24.473 [2024-11-19 23:30:10.398970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:24.473 [2024-11-19 23:30:10.398977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:24.473 [2024-11-19 23:30:10.398984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:24.473 [2024-11-19 23:30:10.398991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:24.473 [2024-11-19 23:30:10.398999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.473 [2024-11-19 23:30:10.399006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:24.473 [2024-11-19 23:30:10.399014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:24.473 [2024-11-19 23:30:10.399023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.473 [2024-11-19 23:30:10.399031] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:24.473 [2024-11-19 23:30:10.399040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:24.473 [2024-11-19 23:30:10.399054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:24.473 [2024-11-19 23:30:10.399061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:24.473 [2024-11-19 23:30:10.399069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:24.473 [2024-11-19 23:30:10.399076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:24.473 [2024-11-19 23:30:10.399083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:24.473 [2024-11-19 23:30:10.399090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:24.473 [2024-11-19 23:30:10.399097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:24.473 [2024-11-19 23:30:10.399104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:24.473 [2024-11-19 23:30:10.399114] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:24.473 [2024-11-19 23:30:10.399125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:24.473 [2024-11-19 23:30:10.399133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:24.473 [2024-11-19 23:30:10.399143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:24.473 [2024-11-19 23:30:10.399150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:24.473 [2024-11-19 23:30:10.399160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:24.473 [2024-11-19 23:30:10.399167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:24.473 [2024-11-19 23:30:10.399174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:24.473 [2024-11-19 23:30:10.399182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:24.473 [2024-11-19 23:30:10.399189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:24.473 [2024-11-19 23:30:10.399196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:24.473 [2024-11-19 23:30:10.399203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:24.473 [2024-11-19 23:30:10.399210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:24.473 [2024-11-19 23:30:10.399218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:24.473 [2024-11-19 23:30:10.399225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:24.473 [2024-11-19 23:30:10.399233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:24.473 [2024-11-19 23:30:10.399240] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:24.473 [2024-11-19 23:30:10.399248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:24.473 [2024-11-19 23:30:10.399258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:24.473 [2024-11-19 23:30:10.399266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:24.473 [2024-11-19 23:30:10.399273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:24.473 [2024-11-19 23:30:10.399283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:24.473 [2024-11-19 23:30:10.399294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.399302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:24.473 [2024-11-19 23:30:10.399311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:17:24.473 [2024-11-19 23:30:10.399319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.413361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.413540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:24.473 [2024-11-19 23:30:10.413558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.943 ms 00:17:24.473 [2024-11-19 23:30:10.413567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.413662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.413671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:24.473 [2024-11-19 23:30:10.413680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 
00:17:24.473 [2024-11-19 23:30:10.413688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.441997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.442281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:24.473 [2024-11-19 23:30:10.442317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.220 ms 00:17:24.473 [2024-11-19 23:30:10.442334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.442413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.442435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:24.473 [2024-11-19 23:30:10.442468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:24.473 [2024-11-19 23:30:10.442488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.443197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.443256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:24.473 [2024-11-19 23:30:10.443276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:17:24.473 [2024-11-19 23:30:10.443305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.443561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.443579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:24.473 [2024-11-19 23:30:10.443596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:24.473 [2024-11-19 23:30:10.443611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.451579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.451631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:24.473 [2024-11-19 23:30:10.451648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.931 ms 00:17:24.473 [2024-11-19 23:30:10.451656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.455182] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:24.473 [2024-11-19 23:30:10.455240] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:24.473 [2024-11-19 23:30:10.455253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.455262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:24.473 [2024-11-19 23:30:10.455271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:17:24.473 [2024-11-19 23:30:10.455278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.471004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.471050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:24.473 [2024-11-19 23:30:10.471066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.668 ms 00:17:24.473 [2024-11-19 23:30:10.471074] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.474041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.474087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:24.473 [2024-11-19 23:30:10.474098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.932 ms 00:17:24.473 [2024-11-19 23:30:10.474106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.476677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.476723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:24.473 [2024-11-19 23:30:10.476757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:17:24.473 [2024-11-19 23:30:10.476765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.473 [2024-11-19 23:30:10.477142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.473 [2024-11-19 23:30:10.477168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:24.473 [2024-11-19 23:30:10.477178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:24.473 [2024-11-19 23:30:10.477185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.503371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.503434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:24.474 [2024-11-19 23:30:10.503447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.168 ms 00:17:24.474 [2024-11-19 23:30:10.503456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.512229] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:24.474 [2024-11-19 23:30:10.515381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.515545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:24.474 [2024-11-19 23:30:10.515578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.878 ms 00:17:24.474 [2024-11-19 23:30:10.515587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.515666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.515677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:24.474 [2024-11-19 23:30:10.515686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:24.474 [2024-11-19 23:30:10.515694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.515784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.515796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:24.474 [2024-11-19 23:30:10.515805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:24.474 [2024-11-19 23:30:10.515817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.515837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.515846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:17:24.474 [2024-11-19 23:30:10.515855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:24.474 [2024-11-19 23:30:10.515865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.515901] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:24.474 [2024-11-19 23:30:10.515917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.515925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:24.474 [2024-11-19 23:30:10.515934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:24.474 [2024-11-19 23:30:10.515942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.521475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.521523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:24.474 [2024-11-19 23:30:10.521534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.512 ms 00:17:24.474 [2024-11-19 23:30:10.521542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.521625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:24.474 [2024-11-19 23:30:10.521639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:24.474 [2024-11-19 23:30:10.521649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:24.474 [2024-11-19 23:30:10.521662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:24.474 [2024-11-19 23:30:10.522802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.177 ms, result 0 00:17:25.420  [2024-11-19T23:30:12.557Z] Copying: 17/1024 [MB] (17 MBps) [... periodic dd progress frames through 2024-11-19T23:31:12Z ...] [2024-11-19T23:31:12.974Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 23:31:12.948376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.782 [2024-11-19 23:31:12.948445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:26.782 [2024-11-19 23:31:12.948464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:26.782 [2024-11-19 23:31:12.948479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.782 [2024-11-19 23:31:12.948508] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:26.782 [2024-11-19 23:31:12.949459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.782 [2024-11-19 23:31:12.949495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:26.782 [2024-11-19 23:31:12.949507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:18:26.782 [2024-11-19 23:31:12.949517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.782 [2024-11-19 23:31:12.952889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.782 [2024-11-19 23:31:12.952935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:26.782 [2024-11-19 23:31:12.952958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms
00:18:26.782 [2024-11-19 23:31:12.952968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.973624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.973686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:27.045 [2024-11-19 23:31:12.973707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.638 ms 00:18:27.045 [2024-11-19 23:31:12.973717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.979994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.980039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:27.045 [2024-11-19 23:31:12.980061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.222 ms 00:18:27.045 [2024-11-19 23:31:12.980071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.983143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.983194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:27.045 [2024-11-19 23:31:12.983205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.985 ms 00:18:27.045 [2024-11-19 23:31:12.983214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.989228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.989334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:27.045 [2024-11-19 23:31:12.989358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.962 ms 00:18:27.045 [2024-11-19 23:31:12.989372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.989554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.989587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:27.045 [2024-11-19 23:31:12.989602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:18:27.045 [2024-11-19 23:31:12.989615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.992884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.992969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:27.045 [2024-11-19 23:31:12.992985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:18:27.045 [2024-11-19 23:31:12.992997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.996184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.996245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:27.045 [2024-11-19 23:31:12.996260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.128 ms 00:18:27.045 [2024-11-19 23:31:12.996272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:12.998769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:12.998825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:27.045 [2024-11-19 23:31:12.998840] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:18:27.045 [2024-11-19 23:31:12.998851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:13.001288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.045 [2024-11-19 23:31:13.001340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:27.045 [2024-11-19 23:31:13.001351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:18:27.045 [2024-11-19 23:31:13.001359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.045 [2024-11-19 23:31:13.001402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:27.045 [2024-11-19 23:31:13.001429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2-100 report the same: 0 / 261120 wr_cnt: 0 state: free ...] 00:18:27.046 [2024-11-19 23:31:13.002301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:27.046 [2024-11-19 23:31:13.002309] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c164ffe3-a6bc-4c97-b488-20bc6f7701ea 00:18:27.046 [2024-11-19 23:31:13.002317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:27.046 [2024-11-19 23:31:13.002326] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:27.046 [2024-11-19 23:31:13.002334] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:27.046 [2024-11-19 23:31:13.002342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:27.046 [2024-11-19 23:31:13.002356] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:27.046 [2024-11-19 23:31:13.002364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:27.046 [2024-11-19 23:31:13.002372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:27.046 [2024-11-19 23:31:13.002379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:27.046 [2024-11-19 23:31:13.002387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:27.046 [2024-11-19 23:31:13.002394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.046 [2024-11-19 23:31:13.002409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:27.046 [2024-11-19 23:31:13.002430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:18:27.046 [2024-11-19 23:31:13.002438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.046 [2024-11-19 23:31:13.004909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.046 [2024-11-19 23:31:13.004942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:27.046 [2024-11-19 23:31:13.004954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.451 ms 00:18:27.046 [2024-11-19 23:31:13.004963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.046 [2024-11-19 23:31:13.005090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.046 [2024-11-19 23:31:13.005107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:27.046 [2024-11-19 23:31:13.005117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:27.046 [2024-11-19 23:31:13.005125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.046 [2024-11-19 23:31:13.013182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:27.046 [2024-11-19 23:31:13.013368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:27.046 [2024-11-19 23:31:13.013426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.046 [2024-11-19 23:31:13.013450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.046 [2024-11-19 23:31:13.013540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.046 [2024-11-19 23:31:13.013571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:27.046 [2024-11-19 23:31:13.013590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.046 [2024-11-19 23:31:13.013609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.013680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.013705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:27.047 [2024-11-19 23:31:13.013726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.013820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.013856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.013878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:27.047 [2024-11-19 23:31:13.013907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.013926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.027767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.027939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:27.047 [2024-11-19 23:31:13.028009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.028034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.038178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.038342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.047 [2024-11-19 23:31:13.038366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.038374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.038426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.038436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:27.047 [2024-11-19 23:31:13.038444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.038452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.038489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.038498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:27.047 [2024-11-19 23:31:13.038506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.038517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 
23:31:13.038590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.038601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:27.047 [2024-11-19 23:31:13.038609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.038617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.038652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.038663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:27.047 [2024-11-19 23:31:13.038671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.038679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.038724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.038757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:27.047 [2024-11-19 23:31:13.038766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.038774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.038823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:27.047 [2024-11-19 23:31:13.038833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:27.047 [2024-11-19 23:31:13.038842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:27.047 [2024-11-19 23:31:13.038854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.047 [2024-11-19 23:31:13.038989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 90.581 ms, result 0 00:18:27.619 00:18:27.619 00:18:27.619 23:31:13 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:27.619 [2024-11-19 23:31:13.667817] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
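Every FTL management step in this log is emitted as a trace_step quadruple (Action, then name, then duration, then status), for both the 'FTL startup' and 'FTL shutdown' processes above and the restore pass started by spdk_dd here. A minimal sketch for tallying those step durations from a saved copy of this console output follows; it assumes only the record shapes visible in this log, and the input file name ftl_restore.log is hypothetical.

    import re
    from collections import OrderedDict

    # Record shapes assumed from the log above:
    #   ... 428:trace_step: *NOTICE*: [FTL][ftl0] name: <step name> <elapsed HH:MM:SS.mmm>
    #   ... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: <ms> ms
    NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}")
    DUR_RE = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def summarize(text):
        """Accumulate per-step durations; name (428) and duration (430)
        records alternate in the log, so index-wise pairing suffices."""
        steps = OrderedDict()
        for name, dur in zip(NAME_RE.findall(text), DUR_RE.findall(text)):
            steps[name] = steps.get(name, 0.0) + float(dur)
        return steps

    if __name__ == "__main__":
        with open("ftl_restore.log") as f:  # hypothetical capture of this console output
            for name, ms in summarize(f.read()).items():
                print(f"{ms:10.3f} ms  {name}")

A stricter parser would walk the records in order and attach each 430 duration to the most recent 428 name, which also distinguishes Action from Rollback passes of the same step; for a quick per-step total over a well-formed capture, the pairing above is enough.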
00:18:27.619 [2024-11-19 23:31:13.667981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86766 ] 00:18:27.880 [2024-11-19 23:31:13.830589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.880 [2024-11-19 23:31:13.859959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.880 [2024-11-19 23:31:13.971621] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:27.880 [2024-11-19 23:31:13.971704] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:28.143 [2024-11-19 23:31:14.133454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.133702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:28.143 [2024-11-19 23:31:14.133758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:28.143 [2024-11-19 23:31:14.133768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.133843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.133854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.143 [2024-11-19 23:31:14.133863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:28.143 [2024-11-19 23:31:14.133879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.133907] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:28.143 [2024-11-19 23:31:14.134314] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:28.143 [2024-11-19 23:31:14.134351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.134361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.143 [2024-11-19 23:31:14.134372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:18:28.143 [2024-11-19 23:31:14.134386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.136131] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:28.143 [2024-11-19 23:31:14.140288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.140518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:28.143 [2024-11-19 23:31:14.140539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.159 ms 00:18:28.143 [2024-11-19 23:31:14.140555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.140711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.140782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:28.143 [2024-11-19 23:31:14.140793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:28.143 [2024-11-19 23:31:14.140801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.149364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:28.143 [2024-11-19 23:31:14.149409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.143 [2024-11-19 23:31:14.149429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.512 ms 00:18:28.143 [2024-11-19 23:31:14.149438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.149544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.149555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.143 [2024-11-19 23:31:14.149564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:28.143 [2024-11-19 23:31:14.149575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.149642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.149662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:28.143 [2024-11-19 23:31:14.149671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:28.143 [2024-11-19 23:31:14.149680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.149708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.143 [2024-11-19 23:31:14.151884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.151923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.143 [2024-11-19 23:31:14.151933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:18:28.143 [2024-11-19 23:31:14.151950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.152000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.152010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:28.143 [2024-11-19 23:31:14.152019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:28.143 [2024-11-19 23:31:14.152027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.152057] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:28.143 [2024-11-19 23:31:14.152081] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:28.143 [2024-11-19 23:31:14.152122] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:28.143 [2024-11-19 23:31:14.152142] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:28.143 [2024-11-19 23:31:14.152249] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:28.143 [2024-11-19 23:31:14.152263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:28.143 [2024-11-19 23:31:14.152274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:28.143 [2024-11-19 23:31:14.152289] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:28.143 [2024-11-19 23:31:14.152303] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:28.143 [2024-11-19 23:31:14.152313] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:28.143 [2024-11-19 23:31:14.152322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:28.143 [2024-11-19 23:31:14.152330] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:28.143 [2024-11-19 23:31:14.152341] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:28.143 [2024-11-19 23:31:14.152354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.152362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:28.143 [2024-11-19 23:31:14.152373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:18:28.143 [2024-11-19 23:31:14.152384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.143 [2024-11-19 23:31:14.152468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.143 [2024-11-19 23:31:14.152481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:28.144 [2024-11-19 23:31:14.152490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:28.144 [2024-11-19 23:31:14.152497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.144 [2024-11-19 23:31:14.152600] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:28.144 [2024-11-19 23:31:14.152614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:28.144 [2024-11-19 23:31:14.152624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.144 [2024-11-19 23:31:14.152634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:28.144 [2024-11-19 23:31:14.152663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:28.144 [2024-11-19 23:31:14.152683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:28.144 [2024-11-19 23:31:14.152693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.144 [2024-11-19 23:31:14.152715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:28.144 [2024-11-19 23:31:14.152726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:28.144 [2024-11-19 23:31:14.152758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.144 [2024-11-19 23:31:14.152767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:28.144 [2024-11-19 23:31:14.152776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:28.144 [2024-11-19 23:31:14.152785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:28.144 [2024-11-19 23:31:14.152805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:28.144 [2024-11-19 23:31:14.152813] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:28.144 [2024-11-19 23:31:14.152832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.144 [2024-11-19 23:31:14.152849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:28.144 [2024-11-19 23:31:14.152857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.144 [2024-11-19 23:31:14.152873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:28.144 [2024-11-19 23:31:14.152887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.144 [2024-11-19 23:31:14.152907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:28.144 [2024-11-19 23:31:14.152915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.144 [2024-11-19 23:31:14.152928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:28.144 [2024-11-19 23:31:14.152936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.144 [2024-11-19 23:31:14.152950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:28.144 [2024-11-19 23:31:14.152956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:28.144 [2024-11-19 23:31:14.152962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.144 [2024-11-19 23:31:14.152969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:28.144 [2024-11-19 23:31:14.152976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:28.144 [2024-11-19 23:31:14.152983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.144 [2024-11-19 23:31:14.152990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:28.144 [2024-11-19 23:31:14.152997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:28.144 [2024-11-19 23:31:14.153009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.144 [2024-11-19 23:31:14.153016] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:28.144 [2024-11-19 23:31:14.153025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:28.144 [2024-11-19 23:31:14.153036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.144 [2024-11-19 23:31:14.153044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.144 [2024-11-19 23:31:14.153052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:28.144 [2024-11-19 23:31:14.153059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:28.144 [2024-11-19 23:31:14.153066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:28.144 
[2024-11-19 23:31:14.153084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:28.144 [2024-11-19 23:31:14.153091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:28.144 [2024-11-19 23:31:14.153098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:28.144 [2024-11-19 23:31:14.153107] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:28.144 [2024-11-19 23:31:14.153116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.144 [2024-11-19 23:31:14.153127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:28.144 [2024-11-19 23:31:14.153135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:28.144 [2024-11-19 23:31:14.153142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:28.144 [2024-11-19 23:31:14.153152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:28.144 [2024-11-19 23:31:14.153159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:28.144 [2024-11-19 23:31:14.153168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:28.144 [2024-11-19 23:31:14.153176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:28.144 [2024-11-19 23:31:14.153183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:28.144 [2024-11-19 23:31:14.153190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:28.144 [2024-11-19 23:31:14.153198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:28.144 [2024-11-19 23:31:14.153205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:28.144 [2024-11-19 23:31:14.153213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:28.144 [2024-11-19 23:31:14.153221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:28.144 [2024-11-19 23:31:14.153228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:28.144 [2024-11-19 23:31:14.153235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:28.144 [2024-11-19 23:31:14.153244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.144 [2024-11-19 23:31:14.153252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:28.144 [2024-11-19 23:31:14.153259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:28.144 [2024-11-19 23:31:14.153267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:28.144 [2024-11-19 23:31:14.153279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:28.144 [2024-11-19 23:31:14.153287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.144 [2024-11-19 23:31:14.153296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:28.144 [2024-11-19 23:31:14.153305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:18:28.144 [2024-11-19 23:31:14.153313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.144 [2024-11-19 23:31:14.168226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.144 [2024-11-19 23:31:14.168274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:28.144 [2024-11-19 23:31:14.168287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.855 ms 00:18:28.145 [2024-11-19 23:31:14.168303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.168397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.168407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:28.145 [2024-11-19 23:31:14.168420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:28.145 [2024-11-19 23:31:14.168432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.192316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.192384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:28.145 [2024-11-19 23:31:14.192402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.820 ms 00:18:28.145 [2024-11-19 23:31:14.192415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.192475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.192488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:28.145 [2024-11-19 23:31:14.192501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:28.145 [2024-11-19 23:31:14.192520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.193151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.193191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:28.145 [2024-11-19 23:31:14.193207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:18:28.145 [2024-11-19 23:31:14.193221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.193422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.193448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:28.145 [2024-11-19 23:31:14.193461] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:18:28.145 [2024-11-19 23:31:14.193472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.201443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.201499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:28.145 [2024-11-19 23:31:14.201516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.940 ms 00:18:28.145 [2024-11-19 23:31:14.201524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.205384] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:28.145 [2024-11-19 23:31:14.205574] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:28.145 [2024-11-19 23:31:14.205594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.205603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:28.145 [2024-11-19 23:31:14.205613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.978 ms 00:18:28.145 [2024-11-19 23:31:14.205621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.221683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.221749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:28.145 [2024-11-19 23:31:14.221763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.007 ms 00:18:28.145 [2024-11-19 23:31:14.221771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.224870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.224916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:28.145 [2024-11-19 23:31:14.224927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:18:28.145 [2024-11-19 23:31:14.224935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.227596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.227646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:28.145 [2024-11-19 23:31:14.227656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:18:28.145 [2024-11-19 23:31:14.227664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.228070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.228087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:28.145 [2024-11-19 23:31:14.228103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:28.145 [2024-11-19 23:31:14.228116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.254339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.254399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:28.145 [2024-11-19 23:31:14.254412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.196 ms 00:18:28.145 [2024-11-19 23:31:14.254421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.262507] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:28.145 [2024-11-19 23:31:14.265683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.265754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:28.145 [2024-11-19 23:31:14.265770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.211 ms 00:18:28.145 [2024-11-19 23:31:14.265779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.265858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.265870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:28.145 [2024-11-19 23:31:14.265880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:28.145 [2024-11-19 23:31:14.265888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.265957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.265970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:28.145 [2024-11-19 23:31:14.266060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:28.145 [2024-11-19 23:31:14.266070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.266091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.266100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:28.145 [2024-11-19 23:31:14.266108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:28.145 [2024-11-19 23:31:14.266122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.266156] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:28.145 [2024-11-19 23:31:14.266166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.266175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:28.145 [2024-11-19 23:31:14.266188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:28.145 [2024-11-19 23:31:14.266200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.271597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.271644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:28.145 [2024-11-19 23:31:14.271656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.377 ms 00:18:28.145 [2024-11-19 23:31:14.271664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.145 [2024-11-19 23:31:14.271771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.145 [2024-11-19 23:31:14.271785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:28.145 [2024-11-19 23:31:14.271794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:28.146 [2024-11-19 23:31:14.271803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.146 
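The startup sequence above is reported step by step: each management action is logged as an Action with a name, its duration in milliseconds, and a status (0 on success). Below is a minimal sketch of how such a per-step timing trace can be produced; run_step and init_metadata are hypothetical stand-ins for illustration, not the actual helpers in mngt/ftl_mngt.c.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical step callback: returns 0 on success. */
    typedef int (*step_fn)(void);

    static int run_step(const char *name, step_fn fn)
    {
        struct timespec t0, t1;
        int status;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        /* Same Action / name / duration / status shape as the notices above. */
        printf("Action\n  name:     %s\n  duration: %.3f ms\n  status:   %d\n",
               name,
               (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6,
               status);
        return status;
    }

    /* Stand-in for a real initialization step. */
    static int init_metadata(void) { return 0; }

    int main(void)
    {
        return run_step("Initialize metadata", init_metadata);
    }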
[2024-11-19 23:31:14.272928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.004 ms, result 0 00:18:29.534  [2024-11-19T23:31:16.673Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-19T23:31:17.617Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-19T23:31:18.561Z] Copying: 41/1024 [MB] (18 MBps) [2024-11-19T23:31:19.504Z] Copying: 52/1024 [MB] (10 MBps) [2024-11-19T23:31:20.890Z] Copying: 69/1024 [MB] (17 MBps) [2024-11-19T23:31:21.463Z] Copying: 83/1024 [MB] (13 MBps) [2024-11-19T23:31:22.850Z] Copying: 97/1024 [MB] (13 MBps) [2024-11-19T23:31:23.800Z] Copying: 107/1024 [MB] (10 MBps) [2024-11-19T23:31:24.745Z] Copying: 118/1024 [MB] (10 MBps) [2024-11-19T23:31:25.704Z] Copying: 128/1024 [MB] (10 MBps) [2024-11-19T23:31:26.649Z] Copying: 144/1024 [MB] (15 MBps) [2024-11-19T23:31:27.590Z] Copying: 155/1024 [MB] (10 MBps) [2024-11-19T23:31:28.539Z] Copying: 171/1024 [MB] (15 MBps) [2024-11-19T23:31:29.486Z] Copying: 186/1024 [MB] (14 MBps) [2024-11-19T23:31:30.879Z] Copying: 199/1024 [MB] (13 MBps) [2024-11-19T23:31:31.824Z] Copying: 212/1024 [MB] (12 MBps) [2024-11-19T23:31:32.838Z] Copying: 224/1024 [MB] (12 MBps) [2024-11-19T23:31:33.785Z] Copying: 235/1024 [MB] (11 MBps) [2024-11-19T23:31:34.731Z] Copying: 254/1024 [MB] (18 MBps) [2024-11-19T23:31:35.675Z] Copying: 265/1024 [MB] (10 MBps) [2024-11-19T23:31:36.620Z] Copying: 281/1024 [MB] (15 MBps) [2024-11-19T23:31:37.565Z] Copying: 299/1024 [MB] (18 MBps) [2024-11-19T23:31:38.510Z] Copying: 316/1024 [MB] (17 MBps) [2024-11-19T23:31:39.459Z] Copying: 334/1024 [MB] (17 MBps) [2024-11-19T23:31:40.847Z] Copying: 347/1024 [MB] (13 MBps) [2024-11-19T23:31:41.804Z] Copying: 359/1024 [MB] (12 MBps) [2024-11-19T23:31:42.748Z] Copying: 371/1024 [MB] (11 MBps) [2024-11-19T23:31:43.692Z] Copying: 390/1024 [MB] (19 MBps) [2024-11-19T23:31:44.636Z] Copying: 409/1024 [MB] (18 MBps) [2024-11-19T23:31:45.582Z] Copying: 420/1024 [MB] (11 MBps) [2024-11-19T23:31:46.532Z] Copying: 431/1024 [MB] (10 MBps) [2024-11-19T23:31:47.474Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-19T23:31:48.863Z] Copying: 464/1024 [MB] (22 MBps) [2024-11-19T23:31:49.807Z] Copying: 479/1024 [MB] (14 MBps) [2024-11-19T23:31:50.753Z] Copying: 504/1024 [MB] (24 MBps) [2024-11-19T23:31:51.697Z] Copying: 524/1024 [MB] (20 MBps) [2024-11-19T23:31:52.643Z] Copying: 541/1024 [MB] (17 MBps) [2024-11-19T23:31:53.589Z] Copying: 560/1024 [MB] (18 MBps) [2024-11-19T23:31:54.533Z] Copying: 582/1024 [MB] (22 MBps) [2024-11-19T23:31:55.477Z] Copying: 603/1024 [MB] (21 MBps) [2024-11-19T23:31:56.864Z] Copying: 624/1024 [MB] (20 MBps) [2024-11-19T23:31:57.811Z] Copying: 647/1024 [MB] (23 MBps) [2024-11-19T23:31:58.758Z] Copying: 673/1024 [MB] (26 MBps) [2024-11-19T23:31:59.705Z] Copying: 695/1024 [MB] (21 MBps) [2024-11-19T23:32:00.650Z] Copying: 717/1024 [MB] (21 MBps) [2024-11-19T23:32:01.616Z] Copying: 735/1024 [MB] (17 MBps) [2024-11-19T23:32:02.573Z] Copying: 745/1024 [MB] (10 MBps) [2024-11-19T23:32:03.519Z] Copying: 762/1024 [MB] (16 MBps) [2024-11-19T23:32:04.464Z] Copying: 781/1024 [MB] (18 MBps) [2024-11-19T23:32:05.849Z] Copying: 794/1024 [MB] (13 MBps) [2024-11-19T23:32:06.793Z] Copying: 818/1024 [MB] (24 MBps) [2024-11-19T23:32:07.737Z] Copying: 834/1024 [MB] (15 MBps) [2024-11-19T23:32:08.680Z] Copying: 851/1024 [MB] (17 MBps) [2024-11-19T23:32:09.621Z] Copying: 872/1024 [MB] (20 MBps) [2024-11-19T23:32:10.564Z] Copying: 885/1024 [MB] (13 MBps) [2024-11-19T23:32:11.506Z] Copying: 907/1024 [MB] (21 MBps) 
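The run of Copying lines (continuing below up to 1024/1024) is a stream of progress samples: cumulative MB written out of 1024, with the rate achieved since the previous sample, closed by an overall average. The numbers are self-consistent: 1024 MB between the first sample at 23:31:16Z and the last at 23:32:17Z is roughly a minute of wall time, i.e. about 16-17 MBps, in line with the closing "average 16 MBps". A small sketch of deriving interval and average rates from such (time, MB) samples; the sample values are transcribed from this log and the times are approximate.

    #include <stdio.h>

    /* (seconds, cumulative MB) progress samples -- a few points read off
     * the Copying entries above and below; times are approximate. */
    struct sample { double t; double mb; };

    int main(void)
    {
        struct sample s[] = {
            {  0.0,   11.0 },   /* Copying: 11/1024 [MB]   */
            {  0.9,   22.0 },   /* Copying: 22/1024 [MB]   */
            { 60.3, 1024.0 },   /* Copying: 1024/1024 [MB] */
        };
        int i, n = sizeof(s) / sizeof(s[0]);

        /* Rate over each interval, as printed per progress line. */
        for (i = 1; i < n; i++)
            printf("interval %d: %.0f MBps\n", i,
                   (s[i].mb - s[i - 1].mb) / (s[i].t - s[i - 1].t));

        /* Overall average; close to the log's "average 16 MBps". */
        printf("average: %.1f MBps\n",
               (s[n - 1].mb - s[0].mb) / (s[n - 1].t - s[0].t));
        return 0;
    }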
[2024-11-19T23:32:12.890Z] Copying: 933/1024 [MB] (26 MBps) [2024-11-19T23:32:13.462Z] Copying: 947/1024 [MB] (14 MBps) [2024-11-19T23:32:14.851Z] Copying: 970/1024 [MB] (22 MBps) [2024-11-19T23:32:15.795Z] Copying: 985/1024 [MB] (15 MBps) [2024-11-19T23:32:16.740Z] Copying: 1001/1024 [MB] (15 MBps) [2024-11-19T23:32:17.003Z] Copying: 1019/1024 [MB] (18 MBps) [2024-11-19T23:32:17.003Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 23:32:16.799609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.811 [2024-11-19 23:32:16.799676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.811 [2024-11-19 23:32:16.799692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.811 [2024-11-19 23:32:16.799709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.811 [2024-11-19 23:32:16.799745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.811 [2024-11-19 23:32:16.800481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.811 [2024-11-19 23:32:16.800509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.811 [2024-11-19 23:32:16.800521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:19:30.811 [2024-11-19 23:32:16.800531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.811 [2024-11-19 23:32:16.800771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.811 [2024-11-19 23:32:16.800790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.811 [2024-11-19 23:32:16.800800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:19:30.811 [2024-11-19 23:32:16.800810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.811 [2024-11-19 23:32:16.804513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.811 [2024-11-19 23:32:16.804546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.811 [2024-11-19 23:32:16.804562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:19:30.811 [2024-11-19 23:32:16.804571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.811 [2024-11-19 23:32:16.810862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.811 [2024-11-19 23:32:16.810914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:30.811 [2024-11-19 23:32:16.810925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.273 ms 00:19:30.811 [2024-11-19 23:32:16.810934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.811 [2024-11-19 23:32:16.813841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.811 [2024-11-19 23:32:16.813891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.811 [2024-11-19 23:32:16.813901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:19:30.811 [2024-11-19 23:32:16.813910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.811 [2024-11-19 23:32:16.819067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.811 [2024-11-19 23:32:16.819118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:30.811 [2024-11-19 23:32:16.819129] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.115 ms 00:19:30.811 [2024-11-19 23:32:16.819149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.811 [2024-11-19 23:32:16.819273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.812 [2024-11-19 23:32:16.819283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.812 [2024-11-19 23:32:16.819293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:30.812 [2024-11-19 23:32:16.819303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.812 [2024-11-19 23:32:16.823017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.812 [2024-11-19 23:32:16.823218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:30.812 [2024-11-19 23:32:16.823239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:19:30.812 [2024-11-19 23:32:16.823247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.812 [2024-11-19 23:32:16.825895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.812 [2024-11-19 23:32:16.825941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:30.812 [2024-11-19 23:32:16.825951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:19:30.812 [2024-11-19 23:32:16.825959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.812 [2024-11-19 23:32:16.828294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.812 [2024-11-19 23:32:16.828341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:30.812 [2024-11-19 23:32:16.828350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.292 ms 00:19:30.812 [2024-11-19 23:32:16.828358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.812 [2024-11-19 23:32:16.830625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.812 [2024-11-19 23:32:16.830670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:30.812 [2024-11-19 23:32:16.830680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:19:30.812 [2024-11-19 23:32:16.830687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.812 [2024-11-19 23:32:16.830725] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:30.812 [2024-11-19 23:32:16.830766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 
00:19:30.812 [2024-11-19 23:32:16.830842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.830999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 
wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:30.812 [2024-11-19 23:32:16.831240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831418] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:30.813 [2024-11-19 23:32:16.831578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:30.813 [2024-11-19 23:32:16.831586] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c164ffe3-a6bc-4c97-b488-20bc6f7701ea 00:19:30.813 [2024-11-19 23:32:16.831594] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:30.813 [2024-11-19 23:32:16.831601] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:30.813 [2024-11-19 23:32:16.831609] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:30.813 [2024-11-19 23:32:16.831622] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:30.813 [2024-11-19 23:32:16.831629] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:30.813 [2024-11-19 23:32:16.831636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:19:30.813 [2024-11-19 23:32:16.831644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:30.813 [2024-11-19 23:32:16.831651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:30.813 [2024-11-19 23:32:16.831658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:30.813 [2024-11-19 23:32:16.831671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.813 [2024-11-19 23:32:16.831686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:30.813 [2024-11-19 23:32:16.831695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:19:30.813 [2024-11-19 23:32:16.831702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.833998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.813 [2024-11-19 23:32:16.834030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:30.813 [2024-11-19 23:32:16.834040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:19:30.813 [2024-11-19 23:32:16.834049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.834189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.813 [2024-11-19 23:32:16.834201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:30.813 [2024-11-19 23:32:16.834211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:30.813 [2024-11-19 23:32:16.834219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.841504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.813 [2024-11-19 23:32:16.841710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.813 [2024-11-19 23:32:16.841774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.813 [2024-11-19 23:32:16.841783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.841863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.813 [2024-11-19 23:32:16.841873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.813 [2024-11-19 23:32:16.841882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.813 [2024-11-19 23:32:16.841890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.841949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.813 [2024-11-19 23:32:16.841961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.813 [2024-11-19 23:32:16.841974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.813 [2024-11-19 23:32:16.841983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.841997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.813 [2024-11-19 23:32:16.842010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.813 [2024-11-19 23:32:16.842019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.813 [2024-11-19 23:32:16.842026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.855378] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.813 [2024-11-19 23:32:16.855427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.813 [2024-11-19 23:32:16.855439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.813 [2024-11-19 23:32:16.855458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.865441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.813 [2024-11-19 23:32:16.865633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.813 [2024-11-19 23:32:16.865651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.813 [2024-11-19 23:32:16.865659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.813 [2024-11-19 23:32:16.865712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.814 [2024-11-19 23:32:16.865722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.814 [2024-11-19 23:32:16.865795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.814 [2024-11-19 23:32:16.865805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.814 [2024-11-19 23:32:16.865872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.814 [2024-11-19 23:32:16.865883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.814 [2024-11-19 23:32:16.865896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.814 [2024-11-19 23:32:16.865905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.814 [2024-11-19 23:32:16.865982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.814 [2024-11-19 23:32:16.865993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.814 [2024-11-19 23:32:16.866004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.814 [2024-11-19 23:32:16.866012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.814 [2024-11-19 23:32:16.866041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.814 [2024-11-19 23:32:16.866050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:30.814 [2024-11-19 23:32:16.866059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.814 [2024-11-19 23:32:16.866070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.814 [2024-11-19 23:32:16.866107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.814 [2024-11-19 23:32:16.866117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.814 [2024-11-19 23:32:16.866125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.814 [2024-11-19 23:32:16.866138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.814 [2024-11-19 23:32:16.866183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.814 [2024-11-19 23:32:16.866195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.814 [2024-11-19 23:32:16.866210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.814 [2024-11-19 23:32:16.866219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:30.814 [2024-11-19 23:32:16.866344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.705 ms, result 0 00:19:31.076 00:19:31.076 00:19:31.076 23:32:17 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:33.627 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:33.627 23:32:19 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:33.627 [2024-11-19 23:32:19.367228] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:19:33.627 [2024-11-19 23:32:19.367371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87446 ] 00:19:33.627 [2024-11-19 23:32:19.530620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.627 [2024-11-19 23:32:19.559045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.627 [2024-11-19 23:32:19.673457] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.627 [2024-11-19 23:32:19.673835] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.890 [2024-11-19 23:32:19.834620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.834683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:33.890 [2024-11-19 23:32:19.834698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:33.890 [2024-11-19 23:32:19.834706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.834788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.834801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.890 [2024-11-19 23:32:19.834810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:33.890 [2024-11-19 23:32:19.834818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.834843] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:33.890 [2024-11-19 23:32:19.835127] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:33.890 [2024-11-19 23:32:19.835148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.835161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.890 [2024-11-19 23:32:19.835171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:33.890 [2024-11-19 23:32:19.835181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.837047] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:33.890 [2024-11-19 23:32:19.840999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.841048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 
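In the layout dumps that follow (as in the ones earlier in the log), each region is reported twice: dump_region in ftl_layout.c prints offsets and sizes in MiB, while ftl_superblock_v5_md_layout_dump prints the same regions as hexadecimal block offsets and sizes. The two views agree under a 4 KiB FTL block: blk_sz:0x20 is 32 blocks = 0.12 MiB (the sb region), and blk_sz:0x5000 is 20480 blocks = 80.00 MiB (the l2p region). Note the block size and the type-to-name pairing are inferred here purely from those matching numbers. A small conversion sketch using values taken from the dump:

    #include <stdio.h>

    #define BLOCK_SIZE 4096ULL  /* bytes; inferred from the dumps' consistency */

    static void show(const char *region, unsigned long long blk_offs,
                     unsigned long long blk_sz)
    {
        printf("%-4s offset: %.2f MiB  size: %.2f MiB\n", region,
               blk_offs * BLOCK_SIZE / 1048576.0,
               blk_sz * BLOCK_SIZE / 1048576.0);
    }

    int main(void)
    {
        /* blk_offs / blk_sz pairs as printed in the SB metadata layout. */
        show("sb",  0x0,  0x20);    /* offset 0.00 MiB, size ~0.12 MiB */
        show("l2p", 0x20, 0x5000);  /* offset ~0.12 MiB, size 80.00 MiB */
        return 0;
    }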
00:19:33.890 [2024-11-19 23:32:19.841059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.956 ms 00:19:33.890 [2024-11-19 23:32:19.841074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.841155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.841169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:33.890 [2024-11-19 23:32:19.841179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:33.890 [2024-11-19 23:32:19.841186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.849267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.849310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.890 [2024-11-19 23:32:19.849326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.042 ms 00:19:33.890 [2024-11-19 23:32:19.849334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.849432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.849443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.890 [2024-11-19 23:32:19.849452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:33.890 [2024-11-19 23:32:19.849472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.849536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.849548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:33.890 [2024-11-19 23:32:19.849557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:33.890 [2024-11-19 23:32:19.849568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.849592] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.890 [2024-11-19 23:32:19.851674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.851713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.890 [2024-11-19 23:32:19.851724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:19:33.890 [2024-11-19 23:32:19.851753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.851788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.851796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:33.890 [2024-11-19 23:32:19.851805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:33.890 [2024-11-19 23:32:19.851820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.851845] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:33.890 [2024-11-19 23:32:19.851865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:33.890 [2024-11-19 23:32:19.851906] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:33.890 [2024-11-19 23:32:19.851923] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:33.890 [2024-11-19 23:32:19.852043] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:33.890 [2024-11-19 23:32:19.852055] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:33.890 [2024-11-19 23:32:19.852071] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:33.890 [2024-11-19 23:32:19.852082] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852092] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852106] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:33.890 [2024-11-19 23:32:19.852115] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:33.890 [2024-11-19 23:32:19.852123] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:33.890 [2024-11-19 23:32:19.852132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:33.890 [2024-11-19 23:32:19.852140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.852151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:33.890 [2024-11-19 23:32:19.852159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:19:33.890 [2024-11-19 23:32:19.852169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.852257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.890 [2024-11-19 23:32:19.852268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:33.890 [2024-11-19 23:32:19.852278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:33.890 [2024-11-19 23:32:19.852291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.890 [2024-11-19 23:32:19.852394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:33.890 [2024-11-19 23:32:19.852407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:33.890 [2024-11-19 23:32:19.852419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:33.890 [2024-11-19 23:32:19.852453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:33.890 [2024-11-19 23:32:19.852483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.890 [2024-11-19 23:32:19.852503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:33.890 [2024-11-19 23:32:19.852512] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:19:33.890 [2024-11-19 23:32:19.852521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.890 [2024-11-19 23:32:19.852529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:33.890 [2024-11-19 23:32:19.852540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:33.890 [2024-11-19 23:32:19.852548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:33.890 [2024-11-19 23:32:19.852565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:33.890 [2024-11-19 23:32:19.852591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:33.890 [2024-11-19 23:32:19.852615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:33.890 [2024-11-19 23:32:19.852648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.890 [2024-11-19 23:32:19.852664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:33.890 [2024-11-19 23:32:19.852672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:33.890 [2024-11-19 23:32:19.852679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.891 [2024-11-19 23:32:19.852689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:33.891 [2024-11-19 23:32:19.852697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:33.891 [2024-11-19 23:32:19.852703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.891 [2024-11-19 23:32:19.852711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:33.891 [2024-11-19 23:32:19.852718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:33.891 [2024-11-19 23:32:19.852725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.891 [2024-11-19 23:32:19.853002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:33.891 [2024-11-19 23:32:19.853030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:33.891 [2024-11-19 23:32:19.853050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.891 [2024-11-19 23:32:19.853070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:33.891 [2024-11-19 23:32:19.853090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:33.891 [2024-11-19 23:32:19.853115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.891 [2024-11-19 23:32:19.853136] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:33.891 [2024-11-19 23:32:19.853162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:33.891 [2024-11-19 23:32:19.853183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.891 [2024-11-19 23:32:19.853292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.891 [2024-11-19 23:32:19.853318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:33.891 [2024-11-19 23:32:19.853337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:33.891 [2024-11-19 23:32:19.853357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:33.891 [2024-11-19 23:32:19.853375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:33.891 [2024-11-19 23:32:19.853396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:33.891 [2024-11-19 23:32:19.853415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:33.891 [2024-11-19 23:32:19.853435] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:33.891 [2024-11-19 23:32:19.853468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.891 [2024-11-19 23:32:19.853500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:33.891 [2024-11-19 23:32:19.853604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:33.891 [2024-11-19 23:32:19.853636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:33.891 [2024-11-19 23:32:19.853671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:33.891 [2024-11-19 23:32:19.853701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:33.891 [2024-11-19 23:32:19.853758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:33.891 [2024-11-19 23:32:19.853789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:33.891 [2024-11-19 23:32:19.853820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:33.891 [2024-11-19 23:32:19.853849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:33.891 [2024-11-19 23:32:19.853879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:33.891 [2024-11-19 23:32:19.853908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:33.891 [2024-11-19 23:32:19.853987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:33.891 [2024-11-19 23:32:19.854022] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:33.891 [2024-11-19 23:32:19.854052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:33.891 [2024-11-19 23:32:19.854082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:33.891 [2024-11-19 23:32:19.854112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.891 [2024-11-19 23:32:19.854143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:33.891 [2024-11-19 23:32:19.854171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:33.891 [2024-11-19 23:32:19.854201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:33.891 [2024-11-19 23:32:19.854234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:33.891 [2024-11-19 23:32:19.854265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.854292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:33.891 [2024-11-19 23:32:19.854314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.940 ms 00:19:33.891 [2024-11-19 23:32:19.854438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.868476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.868522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.891 [2024-11-19 23:32:19.868536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.950 ms 00:19:33.891 [2024-11-19 23:32:19.868545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.868636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.868647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:33.891 [2024-11-19 23:32:19.868657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:33.891 [2024-11-19 23:32:19.868666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.891262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.891327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.891 [2024-11-19 23:32:19.891345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.532 ms 00:19:33.891 [2024-11-19 23:32:19.891357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.891425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.891440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.891 [2024-11-19 23:32:19.891452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:33.891 [2024-11-19 23:32:19.891468] mngt/ftl_mngt.c: 
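
In the superblock dump above the same regions reappear keyed by numeric type, with blk_offs and blk_sz given in FTL blocks (hex). Assuming the usual 4 KiB FTL block size (an assumption, but it reproduces every MiB figure in the layout dump exactly), the hex sizes convert as follows; the region names in parentheses are inferred from the matching sizes:

    BLOCK = 4096  # assumed FTL block size (4 KiB)

    def mib(blk_sz_hex: str) -> float:
        """Convert a blk_sz field from the superblock dump to MiB."""
        return int(blk_sz_hex, 16) * BLOCK / 2**20

    # blk_sz values copied from the 'SB metadata layout - nvc' dump above.
    for region, sz in [("0x2 (l2p)", "0x5000"), ("0x3 (band_md)", "0x80"),
                       ("0xa (p2l ckpt)", "0x800"), ("0xe (trim_md)", "0x40")]:
        print(f"type {region}: {mib(sz):.2f} MiB")
    # -> 80.00, 0.50, 8.00 and 0.25 MiB, matching the NV cache layout dump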
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.892146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.892176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.891 [2024-11-19 23:32:19.892195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:19:33.891 [2024-11-19 23:32:19.892222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.892420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.892455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.891 [2024-11-19 23:32:19.892469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:19:33.891 [2024-11-19 23:32:19.892481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.900675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.900721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.891 [2024-11-19 23:32:19.900769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.164 ms 00:19:33.891 [2024-11-19 23:32:19.900777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.904711] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:33.891 [2024-11-19 23:32:19.904912] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:33.891 [2024-11-19 23:32:19.904938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.904947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:33.891 [2024-11-19 23:32:19.904956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.067 ms 00:19:33.891 [2024-11-19 23:32:19.904965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.920710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.920766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:33.891 [2024-11-19 23:32:19.920778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.691 ms 00:19:33.891 [2024-11-19 23:32:19.920792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.924150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.924358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:33.891 [2024-11-19 23:32:19.924379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:19:33.891 [2024-11-19 23:32:19.924388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.927110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.927158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:33.891 [2024-11-19 23:32:19.927169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:19:33.891 [2024-11-19 23:32:19.927177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 
23:32:19.927560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.927576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:33.891 [2024-11-19 23:32:19.927587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:33.891 [2024-11-19 23:32:19.927598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.952907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.953113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:33.891 [2024-11-19 23:32:19.953176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.288 ms 00:19:33.891 [2024-11-19 23:32:19.953201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.961927] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:33.891 [2024-11-19 23:32:19.965096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.965236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:33.891 [2024-11-19 23:32:19.965298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.842 ms 00:19:33.891 [2024-11-19 23:32:19.965323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.965415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.965443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:33.891 [2024-11-19 23:32:19.965455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:33.891 [2024-11-19 23:32:19.965464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.965534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.965550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:33.891 [2024-11-19 23:32:19.965558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:33.891 [2024-11-19 23:32:19.965572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.965596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.965607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:33.891 [2024-11-19 23:32:19.965616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:33.891 [2024-11-19 23:32:19.965627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.965661] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:33.891 [2024-11-19 23:32:19.965673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.965681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:33.891 [2024-11-19 23:32:19.965692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:33.891 [2024-11-19 23:32:19.965700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.971562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.971797] 
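
Every management step in the startup sequence above is logged as the same four NOTICE records from mngt/ftl_mngt.c: an Action marker, then name:, duration: and status:. That regular shape makes the trace easy to post-process; a small illustrative sketch that folds such records into (name, duration_ms, status) tuples, using two durations copied from the log:

    # Each mngt step emits: Action / 'name: <step>' / 'duration: <ms> ms' / 'status: <n>'
    records = [
        "name: Initialize NV cache",     "duration: 22.532 ms", "status: 0",
        "name: Restore P2L checkpoints", "duration: 25.288 ms", "status: 0",
    ]

    steps, it = [], iter(records)
    for name_rec in it:
        name   = name_rec.removeprefix("name: ")
        ms     = float(next(it).split()[1])   # 'duration: 25.288 ms' -> 25.288
        status = int(next(it).split()[1])     # 'status: 0' -> 0
        steps.append((name, ms, status))

    print(max(steps, key=lambda s: s[1]))     # slowest of the sampled steps

Summed this way, the visible per-step durations account for the bulk of the 'FTL startup' total reported just below (138.871 ms); the remainder belongs to steps logged before this excerpt and to inter-step overhead.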
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:33.891 [2024-11-19 23:32:19.971820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.834 ms 00:19:33.891 [2024-11-19 23:32:19.971830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.972267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.891 [2024-11-19 23:32:19.972311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:33.891 [2024-11-19 23:32:19.972327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:19:33.891 [2024-11-19 23:32:19.972339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.891 [2024-11-19 23:32:19.973994] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.871 ms, result 0 00:19:34.839  [2024-11-19T23:32:22.422Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-19T23:32:22.997Z] Copying: 29/1024 [MB] (19 MBps) [2024-11-19T23:32:24.387Z] Copying: 46/1024 [MB] (17 MBps) [2024-11-19T23:32:25.333Z] Copying: 60/1024 [MB] (14 MBps) [2024-11-19T23:32:26.276Z] Copying: 100/1024 [MB] (39 MBps) [2024-11-19T23:32:27.221Z] Copying: 125/1024 [MB] (25 MBps) [2024-11-19T23:32:28.168Z] Copying: 146/1024 [MB] (20 MBps) [2024-11-19T23:32:29.112Z] Copying: 164/1024 [MB] (18 MBps) [2024-11-19T23:32:30.085Z] Copying: 199/1024 [MB] (34 MBps) [2024-11-19T23:32:31.035Z] Copying: 234/1024 [MB] (35 MBps) [2024-11-19T23:32:32.422Z] Copying: 265/1024 [MB] (30 MBps) [2024-11-19T23:32:32.994Z] Copying: 286/1024 [MB] (20 MBps) [2024-11-19T23:32:34.383Z] Copying: 305/1024 [MB] (19 MBps) [2024-11-19T23:32:35.327Z] Copying: 324/1024 [MB] (18 MBps) [2024-11-19T23:32:36.272Z] Copying: 338/1024 [MB] (14 MBps) [2024-11-19T23:32:37.218Z] Copying: 356/1024 [MB] (17 MBps) [2024-11-19T23:32:38.161Z] Copying: 372/1024 [MB] (15 MBps) [2024-11-19T23:32:39.106Z] Copying: 383/1024 [MB] (11 MBps) [2024-11-19T23:32:40.061Z] Copying: 410/1024 [MB] (27 MBps) [2024-11-19T23:32:41.004Z] Copying: 455/1024 [MB] (44 MBps) [2024-11-19T23:32:42.393Z] Copying: 501/1024 [MB] (45 MBps) [2024-11-19T23:32:43.335Z] Copying: 529/1024 [MB] (28 MBps) [2024-11-19T23:32:44.279Z] Copying: 539/1024 [MB] (10 MBps) [2024-11-19T23:32:45.221Z] Copying: 563/1024 [MB] (23 MBps) [2024-11-19T23:32:46.166Z] Copying: 605/1024 [MB] (42 MBps) [2024-11-19T23:32:47.109Z] Copying: 649/1024 [MB] (44 MBps) [2024-11-19T23:32:48.052Z] Copying: 675/1024 [MB] (26 MBps) [2024-11-19T23:32:48.995Z] Copying: 695/1024 [MB] (19 MBps) [2024-11-19T23:32:50.383Z] Copying: 706/1024 [MB] (11 MBps) [2024-11-19T23:32:51.327Z] Copying: 723/1024 [MB] (17 MBps) [2024-11-19T23:32:52.278Z] Copying: 737/1024 [MB] (13 MBps) [2024-11-19T23:32:53.220Z] Copying: 754/1024 [MB] (16 MBps) [2024-11-19T23:32:54.165Z] Copying: 769/1024 [MB] (15 MBps) [2024-11-19T23:32:55.109Z] Copying: 789/1024 [MB] (19 MBps) [2024-11-19T23:32:56.052Z] Copying: 802/1024 [MB] (13 MBps) [2024-11-19T23:32:56.997Z] Copying: 815/1024 [MB] (12 MBps) [2024-11-19T23:32:58.385Z] Copying: 827/1024 [MB] (11 MBps) [2024-11-19T23:32:59.404Z] Copying: 844/1024 [MB] (17 MBps) [2024-11-19T23:33:00.348Z] Copying: 859/1024 [MB] (14 MBps) [2024-11-19T23:33:01.301Z] Copying: 880/1024 [MB] (20 MBps) [2024-11-19T23:33:02.247Z] Copying: 891/1024 [MB] (11 MBps) [2024-11-19T23:33:03.192Z] Copying: 902/1024 [MB] (10 MBps) [2024-11-19T23:33:04.141Z] Copying: 914/1024 [MB] (11 MBps) [2024-11-19T23:33:05.085Z] 
Copying: 931/1024 [MB] (17 MBps) [2024-11-19T23:33:06.030Z] Copying: 941/1024 [MB] (10 MBps) [2024-11-19T23:33:07.416Z] Copying: 960/1024 [MB] (18 MBps) [2024-11-19T23:33:07.989Z] Copying: 973/1024 [MB] (12 MBps) [2024-11-19T23:33:09.375Z] Copying: 984/1024 [MB] (11 MBps) [2024-11-19T23:33:10.319Z] Copying: 1006/1024 [MB] (21 MBps) [2024-11-19T23:33:10.581Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-19T23:33:10.581Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-19 23:33:10.551820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.389 [2024-11-19 23:33:10.551890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:24.389 [2024-11-19 23:33:10.551911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:24.389 [2024-11-19 23:33:10.551947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.389 [2024-11-19 23:33:10.555261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:24.389 [2024-11-19 23:33:10.558778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.389 [2024-11-19 23:33:10.558806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:24.389 [2024-11-19 23:33:10.558823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:20:24.389 [2024-11-19 23:33:10.558831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.389 [2024-11-19 23:33:10.570377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.389 [2024-11-19 23:33:10.570525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:24.389 [2024-11-19 23:33:10.570542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.561 ms 00:20:24.389 [2024-11-19 23:33:10.570550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.652 [2024-11-19 23:33:10.597267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.652 [2024-11-19 23:33:10.597436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:24.652 [2024-11-19 23:33:10.597461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.695 ms 00:20:24.652 [2024-11-19 23:33:10.597470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.652 [2024-11-19 23:33:10.603704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.652 [2024-11-19 23:33:10.603745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:24.652 [2024-11-19 23:33:10.603757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.190 ms 00:20:24.652 [2024-11-19 23:33:10.603766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.652 [2024-11-19 23:33:10.606407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.652 [2024-11-19 23:33:10.606443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:24.652 [2024-11-19 23:33:10.606452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.583 ms 00:20:24.652 [2024-11-19 23:33:10.606459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.652 [2024-11-19 23:33:10.610661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.652 [2024-11-19 23:33:10.610710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:24.652 
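
The copy loop above moved 1024 MB through the FTL device between the end of the 'FTL startup' management process and the first shutdown step, and the progress samples close with an average of 20 MBps. The two log timestamps reproduce that figure:

    from datetime import datetime

    # Timestamps from the log: 'FTL startup' finished -> first shutdown step.
    start = datetime.fromisoformat("2024-11-19 23:32:19.973994")
    end   = datetime.fromisoformat("2024-11-19 23:33:10.551820")

    elapsed = (end - start).total_seconds()   # about 50.6 s
    print(f"{1024 / elapsed:.1f} MBps")       # ~20.2, matching 'average 20 MBps'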
[2024-11-19 23:33:10.610725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.169 ms 00:20:24.652 [2024-11-19 23:33:10.610751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.914 [2024-11-19 23:33:10.910034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.914 [2024-11-19 23:33:10.910287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:24.914 [2024-11-19 23:33:10.910315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 299.242 ms 00:20:24.914 [2024-11-19 23:33:10.910327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.914 [2024-11-19 23:33:10.914108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.914 [2024-11-19 23:33:10.914159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:24.914 [2024-11-19 23:33:10.914171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.738 ms 00:20:24.914 [2024-11-19 23:33:10.914180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.914 [2024-11-19 23:33:10.917427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.914 [2024-11-19 23:33:10.917611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:24.914 [2024-11-19 23:33:10.917631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:20:24.914 [2024-11-19 23:33:10.917639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.914 [2024-11-19 23:33:10.920599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.914 [2024-11-19 23:33:10.920829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:24.914 [2024-11-19 23:33:10.920854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:20:24.914 [2024-11-19 23:33:10.920863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.914 [2024-11-19 23:33:10.923301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.914 [2024-11-19 23:33:10.923351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:24.914 [2024-11-19 23:33:10.923363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.273 ms 00:20:24.914 [2024-11-19 23:33:10.923372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.914 [2024-11-19 23:33:10.923414] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:24.914 [2024-11-19 23:33:10.923433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103936 / 261120 wr_cnt: 1 state: open 00:20:24.915 [2024-11-19 23:33:10.923446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 
0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923955] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.923994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 
23:33:10.924169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:24.915 [2024-11-19 23:33:10.924202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:24.916 [2024-11-19 23:33:10.924329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:24.916 [2024-11-19 23:33:10.924349] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c164ffe3-a6bc-4c97-b488-20bc6f7701ea 00:20:24.916 [2024-11-19 23:33:10.924358] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103936 00:20:24.916 [2024-11-19 23:33:10.924377] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104896 00:20:24.916 [2024-11-19 23:33:10.924386] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103936 00:20:24.916 [2024-11-19 23:33:10.924395] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:20:24.916 [2024-11-19 23:33:10.924403] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:24.916 
[2024-11-19 23:33:10.924411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:24.916 [2024-11-19 23:33:10.924420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:24.916 [2024-11-19 23:33:10.924427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:24.916 [2024-11-19 23:33:10.924434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:24.916 [2024-11-19 23:33:10.924454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.916 [2024-11-19 23:33:10.924462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:24.916 [2024-11-19 23:33:10.924472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:20:24.916 [2024-11-19 23:33:10.924481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.927897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.916 [2024-11-19 23:33:10.928080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:24.916 [2024-11-19 23:33:10.928177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.396 ms 00:20:24.916 [2024-11-19 23:33:10.928203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.928443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.916 [2024-11-19 23:33:10.928557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:24.916 [2024-11-19 23:33:10.928620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:20:24.916 [2024-11-19 23:33:10.928654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.938931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.939120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.916 [2024-11-19 23:33:10.939183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.939208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.939316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.939343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.916 [2024-11-19 23:33:10.939411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.939442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.939561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.939593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.916 [2024-11-19 23:33:10.939806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.939853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.939903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.939948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.916 [2024-11-19 23:33:10.939970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.940067] mngt/ftl_mngt.c: 
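
The statistics dump above gives total writes 104896 against user writes 103936, i.e. 960 blocks of internal (metadata and housekeeping) writes on top of the user data, which is exactly the reported write amplification:

    total_writes = 104896   # from the ftl_debug stats dump above
    user_writes  = 103936

    print(f"WAF = {total_writes / user_writes:.4f}")                # 1.0092, as reported
    print(f"internal writes: {total_writes - user_writes} blocks")  # 960

The 'total valid LBAs' figure equals the user write count, consistent with a workload that wrote each LBA once and overwrote nothing; correspondingly, all of the data sits in Band 1 (103936 / 261120) while every other band is still free.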
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.960489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.960696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.916 [2024-11-19 23:33:10.960787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.960814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.976862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.977073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.916 [2024-11-19 23:33:10.977133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.977157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.977247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.977280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.916 [2024-11-19 23:33:10.977302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.977322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.977383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.977829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.916 [2024-11-19 23:33:10.977855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.977877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.977990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.978196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.916 [2024-11-19 23:33:10.978222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.978242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.978304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.978327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:24.916 [2024-11-19 23:33:10.978350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.978370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.978433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.978461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.916 [2024-11-19 23:33:10.978482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.916 [2024-11-19 23:33:10.978501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.978572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.916 [2024-11-19 23:33:10.978676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.916 [2024-11-19 23:33:10.978702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:24.916 [2024-11-19 23:33:10.978722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.916 [2024-11-19 23:33:10.978936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 429.126 ms, result 0 00:20:26.303 00:20:26.303 00:20:26.303 23:33:12 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:26.303 [2024-11-19 23:33:12.235516] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:20:26.303 [2024-11-19 23:33:12.235691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87994 ] 00:20:26.303 [2024-11-19 23:33:12.401851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.303 [2024-11-19 23:33:12.431465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.566 [2024-11-19 23:33:12.548487] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:26.566 [2024-11-19 23:33:12.548560] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:26.566 [2024-11-19 23:33:12.710719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.710793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:26.566 [2024-11-19 23:33:12.710809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:26.566 [2024-11-19 23:33:12.710818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.710887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.710907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.566 [2024-11-19 23:33:12.710916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:26.566 [2024-11-19 23:33:12.710928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.710957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:26.566 [2024-11-19 23:33:12.711253] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:26.566 [2024-11-19 23:33:12.711272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.711281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.566 [2024-11-19 23:33:12.711290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:20:26.566 [2024-11-19 23:33:12.711303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.713074] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:26.566 [2024-11-19 23:33:12.717004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.717061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:26.566 [2024-11-19 23:33:12.717074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
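
The restore step then reads the device back through spdk_dd with dd-style --skip and --count arguments, which count I/O units rather than bytes. If the I/O unit here is the ftl0 bdev's 4 KiB block (an assumption; the effective unit depends on how the test configures the copy), the numbers work out to a 1 GiB read starting 512 MiB into the device:

    BLOCK = 4096                    # assumed I/O unit: a 4 KiB ftl0 block
    skip, count = 131072, 262144    # from the spdk_dd command line above

    print(f"skip : {skip  * BLOCK // 2**20} MiB")   # 512 MiB into the input
    print(f"count: {count * BLOCK // 2**20} MiB")   # 1024 MiB copied out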
[FTL][ftl0] duration: 3.932 ms 00:20:26.566 [2024-11-19 23:33:12.717090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.717170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.717181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:26.566 [2024-11-19 23:33:12.717190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:26.566 [2024-11-19 23:33:12.717197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.725721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.725787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.566 [2024-11-19 23:33:12.725801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.471 ms 00:20:26.566 [2024-11-19 23:33:12.725810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.725913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.725923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.566 [2024-11-19 23:33:12.725932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:26.566 [2024-11-19 23:33:12.725941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.726010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.726022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:26.566 [2024-11-19 23:33:12.726032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:26.566 [2024-11-19 23:33:12.726044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.726075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:26.566 [2024-11-19 23:33:12.728182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.728223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.566 [2024-11-19 23:33:12.728235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:20:26.566 [2024-11-19 23:33:12.728244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.728281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.728291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:26.566 [2024-11-19 23:33:12.728299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:26.566 [2024-11-19 23:33:12.728308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.728334] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:26.566 [2024-11-19 23:33:12.728357] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:26.566 [2024-11-19 23:33:12.728393] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:26.566 [2024-11-19 23:33:12.728417] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob load 0x190 bytes 00:20:26.566 [2024-11-19 23:33:12.728528] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:26.566 [2024-11-19 23:33:12.728541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:26.566 [2024-11-19 23:33:12.728552] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:26.566 [2024-11-19 23:33:12.728567] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:26.566 [2024-11-19 23:33:12.728586] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:26.566 [2024-11-19 23:33:12.728595] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:26.566 [2024-11-19 23:33:12.728603] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:26.566 [2024-11-19 23:33:12.728611] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:26.566 [2024-11-19 23:33:12.728623] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:26.566 [2024-11-19 23:33:12.728637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.728645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:26.566 [2024-11-19 23:33:12.728653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:26.566 [2024-11-19 23:33:12.728662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.728801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.566 [2024-11-19 23:33:12.728818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:26.566 [2024-11-19 23:33:12.728827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:26.566 [2024-11-19 23:33:12.728841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.566 [2024-11-19 23:33:12.728945] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:26.566 [2024-11-19 23:33:12.728959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:26.566 [2024-11-19 23:33:12.728969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.566 [2024-11-19 23:33:12.728988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.566 [2024-11-19 23:33:12.728998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:26.566 [2024-11-19 23:33:12.729014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:26.566 [2024-11-19 23:33:12.729024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:26.566 [2024-11-19 23:33:12.729032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:26.566 [2024-11-19 23:33:12.729042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:26.566 [2024-11-19 23:33:12.729051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.566 [2024-11-19 23:33:12.729061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:26.566 [2024-11-19 23:33:12.729069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:26.566 [2024-11-19 23:33:12.729077] ftl_layout.c: 
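
The layout setup above also fixes the address-map geometry: 20971520 L2P entries at 4 bytes each is exactly the 80 MiB l2p region seen in the layout dumps, and at one entry per 4 KiB block (assumed block size, as before) it corresponds to 80 GiB of user-addressable space on the 100 GiB data region:

    entries   = 20_971_520   # L2P entries, from the layout setup above
    addr_size = 4            # bytes per L2P entry ('L2P address size: 4')
    BLOCK     = 4096         # assumed 4 KiB FTL block

    print(f"L2P table:     {entries * addr_size / 2**20:.2f} MiB")  # 80.00 MiB
    print(f"user capacity: {entries * BLOCK / 2**30:.0f} GiB")      # 80 GiB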
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.566 [2024-11-19 23:33:12.729085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:26.566 [2024-11-19 23:33:12.729093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:26.566 [2024-11-19 23:33:12.729102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.566 [2024-11-19 23:33:12.729110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:26.566 [2024-11-19 23:33:12.729118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:26.567 [2024-11-19 23:33:12.729128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:26.567 [2024-11-19 23:33:12.729149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.567 [2024-11-19 23:33:12.729164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:26.567 [2024-11-19 23:33:12.729172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.567 [2024-11-19 23:33:12.729187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:26.567 [2024-11-19 23:33:12.729195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.567 [2024-11-19 23:33:12.729209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:26.567 [2024-11-19 23:33:12.729216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.567 [2024-11-19 23:33:12.729230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:26.567 [2024-11-19 23:33:12.729237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.567 [2024-11-19 23:33:12.729251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:26.567 [2024-11-19 23:33:12.729260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:26.567 [2024-11-19 23:33:12.729267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.567 [2024-11-19 23:33:12.729274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:26.567 [2024-11-19 23:33:12.729281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:26.567 [2024-11-19 23:33:12.729287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:26.567 [2024-11-19 23:33:12.729302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:26.567 [2024-11-19 23:33:12.729310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729317] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:26.567 
[2024-11-19 23:33:12.729325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:26.567 [2024-11-19 23:33:12.729335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.567 [2024-11-19 23:33:12.729344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.567 [2024-11-19 23:33:12.729351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:26.567 [2024-11-19 23:33:12.729357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:26.567 [2024-11-19 23:33:12.729363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:26.567 [2024-11-19 23:33:12.729372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:26.567 [2024-11-19 23:33:12.729382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:26.567 [2024-11-19 23:33:12.729390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:26.567 [2024-11-19 23:33:12.729399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:26.567 [2024-11-19 23:33:12.729408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.567 [2024-11-19 23:33:12.729422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:26.567 [2024-11-19 23:33:12.729430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:26.567 [2024-11-19 23:33:12.729437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:26.567 [2024-11-19 23:33:12.729444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:26.567 [2024-11-19 23:33:12.729451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:26.567 [2024-11-19 23:33:12.729458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:26.567 [2024-11-19 23:33:12.729465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:26.567 [2024-11-19 23:33:12.729474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:26.567 [2024-11-19 23:33:12.729481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:26.567 [2024-11-19 23:33:12.729489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:26.567 [2024-11-19 23:33:12.729496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:26.567 [2024-11-19 23:33:12.729502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:26.567 [2024-11-19 23:33:12.729513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 
blk_offs:0x7200 blk_sz:0x20 00:20:26.567 [2024-11-19 23:33:12.729520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:26.567 [2024-11-19 23:33:12.729527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:26.567 [2024-11-19 23:33:12.729534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.567 [2024-11-19 23:33:12.729543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:26.567 [2024-11-19 23:33:12.729550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:26.567 [2024-11-19 23:33:12.729558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:26.567 [2024-11-19 23:33:12.729566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:26.567 [2024-11-19 23:33:12.729573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.567 [2024-11-19 23:33:12.729580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:26.567 [2024-11-19 23:33:12.729587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:20:26.567 [2024-11-19 23:33:12.729594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.567 [2024-11-19 23:33:12.745534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.567 [2024-11-19 23:33:12.745786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.567 [2024-11-19 23:33:12.746017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.891 ms 00:20:26.567 [2024-11-19 23:33:12.746052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.567 [2024-11-19 23:33:12.746165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.567 [2024-11-19 23:33:12.747025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:26.567 [2024-11-19 23:33:12.747058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:26.567 [2024-11-19 23:33:12.747068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.769518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.769594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.829 [2024-11-19 23:33:12.769614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.346 ms 00:20:26.829 [2024-11-19 23:33:12.769627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.769693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.769708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.829 [2024-11-19 23:33:12.769722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:26.829 [2024-11-19 23:33:12.769774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.770412] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.770472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.829 [2024-11-19 23:33:12.770490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:20:26.829 [2024-11-19 23:33:12.770505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.770728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.770777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.829 [2024-11-19 23:33:12.770802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:20:26.829 [2024-11-19 23:33:12.770815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.780169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.780445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.829 [2024-11-19 23:33:12.780488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.320 ms 00:20:26.829 [2024-11-19 23:33:12.780497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.784498] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:26.829 [2024-11-19 23:33:12.784555] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:26.829 [2024-11-19 23:33:12.784569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.784578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:26.829 [2024-11-19 23:33:12.784587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.954 ms 00:20:26.829 [2024-11-19 23:33:12.784596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.801173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.801303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:26.829 [2024-11-19 23:33:12.801316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.518 ms 00:20:26.829 [2024-11-19 23:33:12.801325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.804573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.804625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:26.829 [2024-11-19 23:33:12.804646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:20:26.829 [2024-11-19 23:33:12.804653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.807484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.807682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:26.829 [2024-11-19 23:33:12.807701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.780 ms 00:20:26.829 [2024-11-19 23:33:12.807709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.808090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:26.829 [2024-11-19 23:33:12.808117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:26.829 [2024-11-19 23:33:12.808129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:26.829 [2024-11-19 23:33:12.808138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.834402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.834631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:26.829 [2024-11-19 23:33:12.834652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.230 ms 00:20:26.829 [2024-11-19 23:33:12.834663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.843181] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:26.829 [2024-11-19 23:33:12.846330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.846381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:26.829 [2024-11-19 23:33:12.846393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.623 ms 00:20:26.829 [2024-11-19 23:33:12.846401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.846485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.846497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:26.829 [2024-11-19 23:33:12.846507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:26.829 [2024-11-19 23:33:12.846521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.848272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.848321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:26.829 [2024-11-19 23:33:12.848338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:20:26.829 [2024-11-19 23:33:12.848351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.848381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.848391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:26.829 [2024-11-19 23:33:12.848400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:26.829 [2024-11-19 23:33:12.848413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.848454] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:26.829 [2024-11-19 23:33:12.848465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.848474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:26.829 [2024-11-19 23:33:12.848488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:26.829 [2024-11-19 23:33:12.848499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.854168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.854219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 
00:20:26.829 [2024-11-19 23:33:12.854240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.650 ms 00:20:26.829 [2024-11-19 23:33:12.854249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.854336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.829 [2024-11-19 23:33:12.854347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:26.829 [2024-11-19 23:33:12.854361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:26.829 [2024-11-19 23:33:12.854370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.829 [2024-11-19 23:33:12.855549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.365 ms, result 0 00:20:28.216  [2024-11-19T23:33:15.352Z] Copying: 8496/1048576 [kB] (8496 kBps) [2024-11-19T23:33:16.294Z] Copying: 27/1024 [MB] (19 MBps) [2024-11-19T23:33:17.239Z] Copying: 52/1024 [MB] (25 MBps) [2024-11-19T23:33:18.182Z] Copying: 73/1024 [MB] (20 MBps) [2024-11-19T23:33:19.126Z] Copying: 96/1024 [MB] (23 MBps) [2024-11-19T23:33:20.068Z] Copying: 121/1024 [MB] (24 MBps) [2024-11-19T23:33:21.456Z] Copying: 144/1024 [MB] (23 MBps) [2024-11-19T23:33:22.399Z] Copying: 165/1024 [MB] (21 MBps) [2024-11-19T23:33:23.344Z] Copying: 189/1024 [MB] (23 MBps) [2024-11-19T23:33:24.286Z] Copying: 209/1024 [MB] (19 MBps) [2024-11-19T23:33:25.230Z] Copying: 224/1024 [MB] (15 MBps) [2024-11-19T23:33:26.174Z] Copying: 234/1024 [MB] (10 MBps) [2024-11-19T23:33:27.116Z] Copying: 252/1024 [MB] (17 MBps) [2024-11-19T23:33:28.136Z] Copying: 265/1024 [MB] (13 MBps) [2024-11-19T23:33:29.081Z] Copying: 276/1024 [MB] (10 MBps) [2024-11-19T23:33:30.469Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-19T23:33:31.412Z] Copying: 298/1024 [MB] (10 MBps) [2024-11-19T23:33:32.357Z] Copying: 309/1024 [MB] (11 MBps) [2024-11-19T23:33:33.301Z] Copying: 320/1024 [MB] (10 MBps) [2024-11-19T23:33:34.244Z] Copying: 331/1024 [MB] (10 MBps) [2024-11-19T23:33:35.187Z] Copying: 341/1024 [MB] (10 MBps) [2024-11-19T23:33:36.131Z] Copying: 354/1024 [MB] (12 MBps) [2024-11-19T23:33:37.076Z] Copying: 365/1024 [MB] (11 MBps) [2024-11-19T23:33:38.464Z] Copying: 380/1024 [MB] (14 MBps) [2024-11-19T23:33:39.407Z] Copying: 391/1024 [MB] (11 MBps) [2024-11-19T23:33:40.351Z] Copying: 404/1024 [MB] (13 MBps) [2024-11-19T23:33:41.295Z] Copying: 418/1024 [MB] (13 MBps) [2024-11-19T23:33:42.239Z] Copying: 431/1024 [MB] (13 MBps) [2024-11-19T23:33:43.183Z] Copying: 442/1024 [MB] (10 MBps) [2024-11-19T23:33:44.129Z] Copying: 456/1024 [MB] (13 MBps) [2024-11-19T23:33:45.073Z] Copying: 473/1024 [MB] (17 MBps) [2024-11-19T23:33:46.044Z] Copying: 492/1024 [MB] (18 MBps) [2024-11-19T23:33:47.430Z] Copying: 510/1024 [MB] (18 MBps) [2024-11-19T23:33:48.387Z] Copying: 526/1024 [MB] (15 MBps) [2024-11-19T23:33:49.337Z] Copying: 543/1024 [MB] (17 MBps) [2024-11-19T23:33:50.280Z] Copying: 565/1024 [MB] (22 MBps) [2024-11-19T23:33:51.224Z] Copying: 579/1024 [MB] (13 MBps) [2024-11-19T23:33:52.167Z] Copying: 600/1024 [MB] (20 MBps) [2024-11-19T23:33:53.111Z] Copying: 616/1024 [MB] (16 MBps) [2024-11-19T23:33:54.054Z] Copying: 631/1024 [MB] (14 MBps) [2024-11-19T23:33:55.444Z] Copying: 650/1024 [MB] (18 MBps) [2024-11-19T23:33:56.402Z] Copying: 668/1024 [MB] (18 MBps) [2024-11-19T23:33:57.382Z] Copying: 687/1024 [MB] (19 MBps) [2024-11-19T23:33:58.323Z] Copying: 703/1024 [MB] (15 MBps) [2024-11-19T23:33:59.272Z] Copying: 
713/1024 [MB] (10 MBps) [2024-11-19T23:34:00.214Z] Copying: 724/1024 [MB] (10 MBps) [2024-11-19T23:34:01.157Z] Copying: 734/1024 [MB] (10 MBps) [2024-11-19T23:34:02.101Z] Copying: 746/1024 [MB] (11 MBps) [2024-11-19T23:34:03.047Z] Copying: 763/1024 [MB] (17 MBps) [2024-11-19T23:34:04.435Z] Copying: 774/1024 [MB] (11 MBps) [2024-11-19T23:34:05.380Z] Copying: 785/1024 [MB] (10 MBps) [2024-11-19T23:34:06.324Z] Copying: 804/1024 [MB] (19 MBps) [2024-11-19T23:34:07.268Z] Copying: 816/1024 [MB] (11 MBps) [2024-11-19T23:34:08.213Z] Copying: 828/1024 [MB] (11 MBps) [2024-11-19T23:34:09.158Z] Copying: 839/1024 [MB] (10 MBps) [2024-11-19T23:34:10.102Z] Copying: 854/1024 [MB] (14 MBps) [2024-11-19T23:34:11.047Z] Copying: 868/1024 [MB] (14 MBps) [2024-11-19T23:34:12.434Z] Copying: 886/1024 [MB] (17 MBps) [2024-11-19T23:34:13.381Z] Copying: 900/1024 [MB] (14 MBps) [2024-11-19T23:34:14.325Z] Copying: 919/1024 [MB] (18 MBps) [2024-11-19T23:34:15.269Z] Copying: 942/1024 [MB] (22 MBps) [2024-11-19T23:34:16.212Z] Copying: 964/1024 [MB] (21 MBps) [2024-11-19T23:34:17.156Z] Copying: 985/1024 [MB] (21 MBps) [2024-11-19T23:34:18.098Z] Copying: 1000/1024 [MB] (15 MBps) [2024-11-19T23:34:18.665Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-19 23:34:18.606597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.473 [2024-11-19 23:34:18.606683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:32.473 [2024-11-19 23:34:18.606700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:32.473 [2024-11-19 23:34:18.606710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.473 [2024-11-19 23:34:18.606750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:32.473 [2024-11-19 23:34:18.607477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.473 [2024-11-19 23:34:18.607510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:32.473 [2024-11-19 23:34:18.607523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:21:32.474 [2024-11-19 23:34:18.607536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.474 [2024-11-19 23:34:18.607799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.474 [2024-11-19 23:34:18.607817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:32.474 [2024-11-19 23:34:18.607827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:21:32.474 [2024-11-19 23:34:18.607837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.474 [2024-11-19 23:34:18.613747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.474 [2024-11-19 23:34:18.613801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:32.474 [2024-11-19 23:34:18.613814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.889 ms 00:21:32.474 [2024-11-19 23:34:18.613822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.474 [2024-11-19 23:34:18.620084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.474 [2024-11-19 23:34:18.620132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:32.474 [2024-11-19 23:34:18.620153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.209 ms 00:21:32.474 [2024-11-19 23:34:18.620162] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.474 [2024-11-19 23:34:18.623943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.474 [2024-11-19 23:34:18.623995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:32.474 [2024-11-19 23:34:18.624007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.710 ms 00:21:32.474 [2024-11-19 23:34:18.624015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.474 [2024-11-19 23:34:18.629669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.474 [2024-11-19 23:34:18.629990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:32.474 [2024-11-19 23:34:18.630012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.606 ms 00:21:32.474 [2024-11-19 23:34:18.630029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.734 [2024-11-19 23:34:18.881869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.734 [2024-11-19 23:34:18.882065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:32.734 [2024-11-19 23:34:18.882151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 251.782 ms 00:21:32.734 [2024-11-19 23:34:18.882182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.734 [2024-11-19 23:34:18.885180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.734 [2024-11-19 23:34:18.885365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:32.734 [2024-11-19 23:34:18.885429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:21:32.734 [2024-11-19 23:34:18.885453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.734 [2024-11-19 23:34:18.887532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.734 [2024-11-19 23:34:18.887696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:32.735 [2024-11-19 23:34:18.887785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms 00:21:32.735 [2024-11-19 23:34:18.887811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.889579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.735 [2024-11-19 23:34:18.889773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:32.735 [2024-11-19 23:34:18.889848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms 00:21:32.735 [2024-11-19 23:34:18.889871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.891606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.735 [2024-11-19 23:34:18.891794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:32.735 [2024-11-19 23:34:18.891860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:21:32.735 [2024-11-19 23:34:18.891883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.891941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:32.735 [2024-11-19 23:34:18.891971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:21:32.735 [2024-11-19 23:34:18.892148] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.892986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 
23:34:18.893508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.893965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:21:32.735 [2024-11-19 23:34:18.894750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.894973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:32.735 [2024-11-19 23:34:18.895787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:21:32.735 [2024-11-19 23:34:18.895799] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c164ffe3-a6bc-4c97-b488-20bc6f7701ea 00:21:32.735 [2024-11-19 23:34:18.895808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:32.735 [2024-11-19 23:34:18.895818] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 28096 00:21:32.735 [2024-11-19 23:34:18.895837] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 27136 00:21:32.735 [2024-11-19 23:34:18.895849] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0354 00:21:32.735 [2024-11-19 23:34:18.895859] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:32.735 [2024-11-19 23:34:18.895868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:32.735 [2024-11-19 23:34:18.895876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:32.735 [2024-11-19 23:34:18.895884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:32.735 [2024-11-19 23:34:18.895891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:32.735 [2024-11-19 23:34:18.895901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.735 [2024-11-19 23:34:18.895931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:32.735 [2024-11-19 23:34:18.895941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:21:32.735 [2024-11-19 23:34:18.895949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.898457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.735 [2024-11-19 23:34:18.898605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:32.735 [2024-11-19 23:34:18.898662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:21:32.735 [2024-11-19 23:34:18.898691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.898879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.735 [2024-11-19 23:34:18.898961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:32.735 [2024-11-19 23:34:18.899009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:32.735 [2024-11-19 23:34:18.899031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.906520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.735 [2024-11-19 23:34:18.906682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:32.735 [2024-11-19 23:34:18.906776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.735 [2024-11-19 23:34:18.906800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.906903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.735 [2024-11-19 23:34:18.906961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:32.735 [2024-11-19 23:34:18.907008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.735 [2024-11-19 23:34:18.907041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.907202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:32.735 [2024-11-19 23:34:18.907284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:32.735 [2024-11-19 23:34:18.907309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.735 [2024-11-19 23:34:18.907359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.907399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.735 [2024-11-19 23:34:18.907474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:32.735 [2024-11-19 23:34:18.907522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.735 [2024-11-19 23:34:18.907545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.735 [2024-11-19 23:34:18.920916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.735 [2024-11-19 23:34:18.921097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:32.735 [2024-11-19 23:34:18.921156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.735 [2024-11-19 23:34:18.921179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.995 [2024-11-19 23:34:18.931165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.995 [2024-11-19 23:34:18.931344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:32.995 [2024-11-19 23:34:18.931400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.995 [2024-11-19 23:34:18.931424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.995 [2024-11-19 23:34:18.931489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.995 [2024-11-19 23:34:18.931522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:32.995 [2024-11-19 23:34:18.931543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.995 [2024-11-19 23:34:18.931596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.995 [2024-11-19 23:34:18.931648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.995 [2024-11-19 23:34:18.931670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:32.995 [2024-11-19 23:34:18.931691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.995 [2024-11-19 23:34:18.931766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.996 [2024-11-19 23:34:18.931865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.996 [2024-11-19 23:34:18.932036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:32.996 [2024-11-19 23:34:18.932068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.996 [2024-11-19 23:34:18.932089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.996 [2024-11-19 23:34:18.932139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.996 [2024-11-19 23:34:18.932165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:32.996 [2024-11-19 23:34:18.932185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.996 [2024-11-19 23:34:18.932204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.996 [2024-11-19 
23:34:18.932255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.996 [2024-11-19 23:34:18.932357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:32.996 [2024-11-19 23:34:18.932381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.996 [2024-11-19 23:34:18.932400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.996 [2024-11-19 23:34:18.932457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:32.996 [2024-11-19 23:34:18.932533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:32.996 [2024-11-19 23:34:18.932557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:32.996 [2024-11-19 23:34:18.932576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.996 [2024-11-19 23:34:18.932750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 326.095 ms, result 0 00:21:32.996 00:21:32.996 00:21:32.996 23:34:19 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:35.539 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 85887 00:21:35.539 Process with pid 85887 is not found 00:21:35.539 Remove shared memory files 00:21:35.539 23:34:21 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 85887 ']' 00:21:35.539 23:34:21 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 85887 00:21:35.539 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85887) - No such process 00:21:35.539 23:34:21 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 85887 is not found' 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:35.539 23:34:21 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:35.539 00:21:35.539 real 4m31.747s 00:21:35.539 user 4m18.327s 00:21:35.539 sys 0m12.831s 00:21:35.539 ************************************ 00:21:35.539 END TEST ftl_restore 00:21:35.539 ************************************ 00:21:35.539 23:34:21 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.539 23:34:21 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 23:34:21 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:35.539 23:34:21 ftl -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:35.539 23:34:21 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.539 23:34:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:35.539 ************************************ 00:21:35.539 START TEST ftl_dirty_shutdown 00:21:35.539 ************************************ 00:21:35.539 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:35.539 * Looking for test storage... 00:21:35.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:35.539 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:35.539 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:35.539 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.800 --rc genhtml_branch_coverage=1 00:21:35.800 --rc genhtml_function_coverage=1 00:21:35.800 --rc genhtml_legend=1 00:21:35.800 --rc geninfo_all_blocks=1 00:21:35.800 --rc geninfo_unexecuted_blocks=1 00:21:35.800 00:21:35.800 ' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.800 --rc genhtml_branch_coverage=1 00:21:35.800 --rc genhtml_function_coverage=1 00:21:35.800 --rc genhtml_legend=1 00:21:35.800 --rc geninfo_all_blocks=1 00:21:35.800 --rc geninfo_unexecuted_blocks=1 00:21:35.800 00:21:35.800 ' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.800 --rc genhtml_branch_coverage=1 00:21:35.800 --rc genhtml_function_coverage=1 00:21:35.800 --rc genhtml_legend=1 00:21:35.800 --rc geninfo_all_blocks=1 00:21:35.800 --rc geninfo_unexecuted_blocks=1 00:21:35.800 00:21:35.800 ' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.800 --rc genhtml_branch_coverage=1 00:21:35.800 --rc genhtml_function_coverage=1 00:21:35.800 --rc genhtml_legend=1 00:21:35.800 --rc geninfo_all_blocks=1 00:21:35.800 --rc geninfo_unexecuted_blocks=1 00:21:35.800 00:21:35.800 ' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.800 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:35.801 23:34:21 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88777 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88777 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 88777 ']' 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:35.801 23:34:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:35.801 [2024-11-19 23:34:21.881413] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:21:35.801 [2024-11-19 23:34:21.881560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88777 ] 00:21:36.060 [2024-11-19 23:34:22.041619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.060 [2024-11-19 23:34:22.070455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:36.632 23:34:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:21:36.893 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:37.155 { 00:21:37.155 "name": "nvme0n1", 00:21:37.155 "aliases": [ 00:21:37.155 "cd6316e6-8152-44bd-80f4-2af2ad2534eb" 00:21:37.155 ], 00:21:37.155 "product_name": "NVMe disk", 00:21:37.155 "block_size": 4096, 00:21:37.155 "num_blocks": 1310720, 00:21:37.155 "uuid": "cd6316e6-8152-44bd-80f4-2af2ad2534eb", 00:21:37.155 "numa_id": -1, 00:21:37.155 "assigned_rate_limits": { 00:21:37.155 "rw_ios_per_sec": 0, 00:21:37.155 "rw_mbytes_per_sec": 0, 00:21:37.155 "r_mbytes_per_sec": 0, 00:21:37.155 "w_mbytes_per_sec": 0 00:21:37.155 }, 00:21:37.155 "claimed": true, 00:21:37.155 "claim_type": "read_many_write_one", 00:21:37.155 "zoned": false, 00:21:37.155 "supported_io_types": { 00:21:37.155 "read": true, 00:21:37.155 "write": true, 00:21:37.155 "unmap": true, 00:21:37.155 "flush": true, 00:21:37.155 "reset": true, 00:21:37.155 "nvme_admin": true, 00:21:37.155 "nvme_io": true, 00:21:37.155 "nvme_io_md": false, 00:21:37.155 "write_zeroes": true, 00:21:37.155 "zcopy": false, 00:21:37.155 "get_zone_info": false, 00:21:37.155 "zone_management": false, 00:21:37.155 "zone_append": false, 00:21:37.155 "compare": true, 00:21:37.155 "compare_and_write": false, 00:21:37.155 "abort": true, 00:21:37.155 "seek_hole": false, 00:21:37.155 "seek_data": false, 00:21:37.155 
"copy": true, 00:21:37.155 "nvme_iov_md": false 00:21:37.155 }, 00:21:37.155 "driver_specific": { 00:21:37.155 "nvme": [ 00:21:37.155 { 00:21:37.155 "pci_address": "0000:00:11.0", 00:21:37.155 "trid": { 00:21:37.155 "trtype": "PCIe", 00:21:37.155 "traddr": "0000:00:11.0" 00:21:37.155 }, 00:21:37.155 "ctrlr_data": { 00:21:37.155 "cntlid": 0, 00:21:37.155 "vendor_id": "0x1b36", 00:21:37.155 "model_number": "QEMU NVMe Ctrl", 00:21:37.155 "serial_number": "12341", 00:21:37.155 "firmware_revision": "8.0.0", 00:21:37.155 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:37.155 "oacs": { 00:21:37.155 "security": 0, 00:21:37.155 "format": 1, 00:21:37.155 "firmware": 0, 00:21:37.155 "ns_manage": 1 00:21:37.155 }, 00:21:37.155 "multi_ctrlr": false, 00:21:37.155 "ana_reporting": false 00:21:37.155 }, 00:21:37.155 "vs": { 00:21:37.155 "nvme_version": "1.4" 00:21:37.155 }, 00:21:37.155 "ns_data": { 00:21:37.155 "id": 1, 00:21:37.155 "can_share": false 00:21:37.155 } 00:21:37.155 } 00:21:37.155 ], 00:21:37.155 "mp_policy": "active_passive" 00:21:37.155 } 00:21:37.155 } 00:21:37.155 ]' 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:37.155 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:37.416 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=23b36f27-eb54-4f10-b337-a75cbb16fb17 00:21:37.416 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:37.416 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23b36f27-eb54-4f10-b337-a75cbb16fb17 00:21:37.677 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:37.937 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0c508cf9-24bb-4105-a298-100ec769327c 00:21:37.937 23:34:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c508cf9-24bb-4105-a298-100ec769327c 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:21:38.198 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:38.458 { 00:21:38.458 "name": "db64d24f-549f-41cc-8af6-e805e5706217", 00:21:38.458 "aliases": [ 00:21:38.458 "lvs/nvme0n1p0" 00:21:38.458 ], 00:21:38.458 "product_name": "Logical Volume", 00:21:38.458 "block_size": 4096, 00:21:38.458 "num_blocks": 26476544, 00:21:38.458 "uuid": "db64d24f-549f-41cc-8af6-e805e5706217", 00:21:38.458 "assigned_rate_limits": { 00:21:38.458 "rw_ios_per_sec": 0, 00:21:38.458 "rw_mbytes_per_sec": 0, 00:21:38.458 "r_mbytes_per_sec": 0, 00:21:38.458 "w_mbytes_per_sec": 0 00:21:38.458 }, 00:21:38.458 "claimed": false, 00:21:38.458 "zoned": false, 00:21:38.458 "supported_io_types": { 00:21:38.458 "read": true, 00:21:38.458 "write": true, 00:21:38.458 "unmap": true, 00:21:38.458 "flush": false, 00:21:38.458 "reset": true, 00:21:38.458 "nvme_admin": false, 00:21:38.458 "nvme_io": false, 00:21:38.458 "nvme_io_md": false, 00:21:38.458 "write_zeroes": true, 00:21:38.458 "zcopy": false, 00:21:38.458 "get_zone_info": false, 00:21:38.458 "zone_management": false, 00:21:38.458 "zone_append": false, 00:21:38.458 "compare": false, 00:21:38.458 "compare_and_write": false, 00:21:38.458 "abort": false, 00:21:38.458 "seek_hole": true, 00:21:38.458 "seek_data": true, 00:21:38.458 "copy": false, 00:21:38.458 "nvme_iov_md": false 00:21:38.458 }, 00:21:38.458 "driver_specific": { 00:21:38.458 "lvol": { 00:21:38.458 "lvol_store_uuid": "0c508cf9-24bb-4105-a298-100ec769327c", 00:21:38.458 "base_bdev": "nvme0n1", 00:21:38.458 "thin_provision": true, 00:21:38.458 "num_allocated_clusters": 0, 00:21:38.458 "snapshot": false, 00:21:38.458 "clone": false, 00:21:38.458 "esnap_clone": false 00:21:38.458 } 00:21:38.458 } 00:21:38.458 } 00:21:38.458 ]' 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:38.458 23:34:24 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=db64d24f-549f-41cc-8af6-e805e5706217 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:21:38.752 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db64d24f-549f-41cc-8af6-e805e5706217 00:21:39.043 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:39.043 { 00:21:39.043 "name": "db64d24f-549f-41cc-8af6-e805e5706217", 00:21:39.043 "aliases": [ 00:21:39.043 "lvs/nvme0n1p0" 00:21:39.043 ], 00:21:39.043 "product_name": "Logical Volume", 00:21:39.043 "block_size": 4096, 00:21:39.043 "num_blocks": 26476544, 00:21:39.043 "uuid": "db64d24f-549f-41cc-8af6-e805e5706217", 00:21:39.043 "assigned_rate_limits": { 00:21:39.043 "rw_ios_per_sec": 0, 00:21:39.043 "rw_mbytes_per_sec": 0, 00:21:39.043 "r_mbytes_per_sec": 0, 00:21:39.043 "w_mbytes_per_sec": 0 00:21:39.043 }, 00:21:39.043 "claimed": false, 00:21:39.044 "zoned": false, 00:21:39.044 "supported_io_types": { 00:21:39.044 "read": true, 00:21:39.044 "write": true, 00:21:39.044 "unmap": true, 00:21:39.044 "flush": false, 00:21:39.044 "reset": true, 00:21:39.044 "nvme_admin": false, 00:21:39.044 "nvme_io": false, 00:21:39.044 "nvme_io_md": false, 00:21:39.044 "write_zeroes": true, 00:21:39.044 "zcopy": false, 00:21:39.044 "get_zone_info": false, 00:21:39.044 "zone_management": false, 00:21:39.044 "zone_append": false, 00:21:39.044 "compare": false, 00:21:39.044 "compare_and_write": false, 00:21:39.044 "abort": false, 00:21:39.044 "seek_hole": true, 00:21:39.044 "seek_data": true, 00:21:39.044 "copy": false, 00:21:39.044 "nvme_iov_md": false 00:21:39.044 }, 00:21:39.044 "driver_specific": { 00:21:39.044 "lvol": { 00:21:39.044 "lvol_store_uuid": "0c508cf9-24bb-4105-a298-100ec769327c", 00:21:39.044 "base_bdev": "nvme0n1", 00:21:39.044 "thin_provision": true, 00:21:39.044 "num_allocated_clusters": 0, 00:21:39.044 "snapshot": false, 00:21:39.044 "clone": false, 00:21:39.044 "esnap_clone": false 00:21:39.044 } 00:21:39.044 } 00:21:39.044 } 00:21:39.044 ]' 00:21:39.044 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:39.044 23:34:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:21:39.044 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:39.044 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:39.044 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:39.044 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:21:39.044 23:34:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:39.044 23:34:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size db64d24f-549f-41cc-8af6-e805e5706217 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=db64d24f-549f-41cc-8af6-e805e5706217 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db64d24f-549f-41cc-8af6-e805e5706217 00:21:39.308 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:39.308 { 00:21:39.308 "name": "db64d24f-549f-41cc-8af6-e805e5706217", 00:21:39.308 "aliases": [ 00:21:39.308 "lvs/nvme0n1p0" 00:21:39.308 ], 00:21:39.308 "product_name": "Logical Volume", 00:21:39.308 "block_size": 4096, 00:21:39.308 "num_blocks": 26476544, 00:21:39.308 "uuid": "db64d24f-549f-41cc-8af6-e805e5706217", 00:21:39.308 "assigned_rate_limits": { 00:21:39.308 "rw_ios_per_sec": 0, 00:21:39.308 "rw_mbytes_per_sec": 0, 00:21:39.308 "r_mbytes_per_sec": 0, 00:21:39.308 "w_mbytes_per_sec": 0 00:21:39.308 }, 00:21:39.308 "claimed": false, 00:21:39.309 "zoned": false, 00:21:39.309 "supported_io_types": { 00:21:39.309 "read": true, 00:21:39.309 "write": true, 00:21:39.309 "unmap": true, 00:21:39.309 "flush": false, 00:21:39.309 "reset": true, 00:21:39.309 "nvme_admin": false, 00:21:39.309 "nvme_io": false, 00:21:39.309 "nvme_io_md": false, 00:21:39.309 "write_zeroes": true, 00:21:39.309 "zcopy": false, 00:21:39.309 "get_zone_info": false, 00:21:39.309 "zone_management": false, 00:21:39.309 "zone_append": false, 00:21:39.309 "compare": false, 00:21:39.309 "compare_and_write": false, 00:21:39.309 "abort": false, 00:21:39.309 "seek_hole": true, 00:21:39.309 "seek_data": true, 00:21:39.309 "copy": false, 00:21:39.309 "nvme_iov_md": false 00:21:39.309 }, 00:21:39.309 "driver_specific": { 00:21:39.309 "lvol": { 00:21:39.309 "lvol_store_uuid": "0c508cf9-24bb-4105-a298-100ec769327c", 00:21:39.309 "base_bdev": "nvme0n1", 00:21:39.309 "thin_provision": true, 00:21:39.309 "num_allocated_clusters": 0, 00:21:39.309 "snapshot": false, 00:21:39.309 "clone": false, 00:21:39.309 "esnap_clone": false 00:21:39.309 } 00:21:39.309 } 00:21:39.309 } 00:21:39.309 ]' 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d db64d24f-549f-41cc-8af6-e805e5706217 
--l2p_dram_limit 10' 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:39.309 23:34:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d db64d24f-549f-41cc-8af6-e805e5706217 --l2p_dram_limit 10 -c nvc0n1p0 00:21:39.570 [2024-11-19 23:34:25.671780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.570 [2024-11-19 23:34:25.671817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:39.570 [2024-11-19 23:34:25.671828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:39.570 [2024-11-19 23:34:25.671835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.570 [2024-11-19 23:34:25.671882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.570 [2024-11-19 23:34:25.671891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:39.570 [2024-11-19 23:34:25.671901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:39.570 [2024-11-19 23:34:25.671917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.570 [2024-11-19 23:34:25.671935] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:39.570 [2024-11-19 23:34:25.672139] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:39.570 [2024-11-19 23:34:25.672154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.570 [2024-11-19 23:34:25.672162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:39.570 [2024-11-19 23:34:25.672168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:21:39.570 [2024-11-19 23:34:25.672175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.570 [2024-11-19 23:34:25.672199] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2df5abd4-1489-4fe8-8391-19c1c8098aee 00:21:39.570 [2024-11-19 23:34:25.673170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.570 [2024-11-19 23:34:25.673278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:39.570 [2024-11-19 23:34:25.673295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:39.570 [2024-11-19 23:34:25.673301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.570 [2024-11-19 23:34:25.678077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.570 [2024-11-19 23:34:25.678103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:39.570 [2024-11-19 23:34:25.678112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:21:39.570 [2024-11-19 23:34:25.678123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.570 [2024-11-19 23:34:25.678182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.570 [2024-11-19 23:34:25.678190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:39.570 [2024-11-19 23:34:25.678197] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:39.570 [2024-11-19 23:34:25.678203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.570 [2024-11-19 23:34:25.678246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.570 [2024-11-19 23:34:25.678254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:39.570 [2024-11-19 23:34:25.678262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:39.571 [2024-11-19 23:34:25.678268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.571 [2024-11-19 23:34:25.678287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:39.571 [2024-11-19 23:34:25.679555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.571 [2024-11-19 23:34:25.679581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:39.571 [2024-11-19 23:34:25.679589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:21:39.571 [2024-11-19 23:34:25.679596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.571 [2024-11-19 23:34:25.679621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.571 [2024-11-19 23:34:25.679629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:39.571 [2024-11-19 23:34:25.679635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:39.571 [2024-11-19 23:34:25.679647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.571 [2024-11-19 23:34:25.679659] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:39.571 [2024-11-19 23:34:25.679778] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:39.571 [2024-11-19 23:34:25.679788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:39.571 [2024-11-19 23:34:25.679803] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:39.571 [2024-11-19 23:34:25.679811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:39.571 [2024-11-19 23:34:25.679821] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:39.571 [2024-11-19 23:34:25.679827] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:39.571 [2024-11-19 23:34:25.679838] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:39.571 [2024-11-19 23:34:25.679844] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:39.571 [2024-11-19 23:34:25.679851] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:39.571 [2024-11-19 23:34:25.679857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.571 [2024-11-19 23:34:25.679864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:39.571 [2024-11-19 23:34:25.679870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:21:39.571 [2024-11-19 23:34:25.679876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.571 [2024-11-19 23:34:25.679948] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.571 [2024-11-19 23:34:25.679958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:39.571 [2024-11-19 23:34:25.679964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:39.571 [2024-11-19 23:34:25.679971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.571 [2024-11-19 23:34:25.680051] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:39.571 [2024-11-19 23:34:25.680061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:39.571 [2024-11-19 23:34:25.680067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:39.571 [2024-11-19 23:34:25.680085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:39.571 [2024-11-19 23:34:25.680103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.571 [2024-11-19 23:34:25.680115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:39.571 [2024-11-19 23:34:25.680122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:39.571 [2024-11-19 23:34:25.680127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:39.571 [2024-11-19 23:34:25.680136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:39.571 [2024-11-19 23:34:25.680142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:39.571 [2024-11-19 23:34:25.680149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:39.571 [2024-11-19 23:34:25.680161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:39.571 [2024-11-19 23:34:25.680177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:39.571 [2024-11-19 23:34:25.680195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:39.571 [2024-11-19 23:34:25.680211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680224] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:39.571 [2024-11-19 23:34:25.680233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:39.571 [2024-11-19 23:34:25.680252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.571 [2024-11-19 23:34:25.680265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:39.571 [2024-11-19 23:34:25.680272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:39.571 [2024-11-19 23:34:25.680278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:39.571 [2024-11-19 23:34:25.680285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:39.571 [2024-11-19 23:34:25.680291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:39.571 [2024-11-19 23:34:25.680299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:39.571 [2024-11-19 23:34:25.680312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:39.571 [2024-11-19 23:34:25.680318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680324] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:39.571 [2024-11-19 23:34:25.680330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:39.571 [2024-11-19 23:34:25.680339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:39.571 [2024-11-19 23:34:25.680353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:39.571 [2024-11-19 23:34:25.680359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:39.571 [2024-11-19 23:34:25.680366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:39.571 [2024-11-19 23:34:25.680372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:39.571 [2024-11-19 23:34:25.680379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:39.571 [2024-11-19 23:34:25.680384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:39.571 [2024-11-19 23:34:25.680394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:39.571 [2024-11-19 23:34:25.680403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.571 [2024-11-19 23:34:25.680412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:39.571 [2024-11-19 23:34:25.680418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:39.571 [2024-11-19 23:34:25.680425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:39.571 [2024-11-19 23:34:25.680432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:39.571 [2024-11-19 23:34:25.680439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:39.571 [2024-11-19 23:34:25.680445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:39.571 [2024-11-19 23:34:25.680455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:39.571 [2024-11-19 23:34:25.680462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:39.571 [2024-11-19 23:34:25.680469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:39.571 [2024-11-19 23:34:25.680475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:39.571 [2024-11-19 23:34:25.680481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:39.571 [2024-11-19 23:34:25.680486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:39.571 [2024-11-19 23:34:25.680492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:39.571 [2024-11-19 23:34:25.680497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:39.571 [2024-11-19 23:34:25.680506] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:39.571 [2024-11-19 23:34:25.680512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:39.571 [2024-11-19 23:34:25.680519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:39.572 [2024-11-19 23:34:25.680524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:39.572 [2024-11-19 23:34:25.680531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:39.572 [2024-11-19 23:34:25.680537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:39.572 [2024-11-19 23:34:25.680543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:39.572 [2024-11-19 23:34:25.680549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:39.572 [2024-11-19 23:34:25.680558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:21:39.572 [2024-11-19 23:34:25.680563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:39.572 [2024-11-19 23:34:25.680591] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:39.572 [2024-11-19 23:34:25.680598] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:43.780 [2024-11-19 23:34:29.876364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.876460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:43.780 [2024-11-19 23:34:29.876480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4195.745 ms 00:21:43.780 [2024-11-19 23:34:29.876490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.890946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.891186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.780 [2024-11-19 23:34:29.891222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.326 ms 00:21:43.780 [2024-11-19 23:34:29.891233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.891385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.891398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.780 [2024-11-19 23:34:29.891414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:43.780 [2024-11-19 23:34:29.891422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.904068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.904119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.780 [2024-11-19 23:34:29.904133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.586 ms 00:21:43.780 [2024-11-19 23:34:29.904142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.904183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.904192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.780 [2024-11-19 23:34:29.904205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:43.780 [2024-11-19 23:34:29.904212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.904723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.904778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.780 [2024-11-19 23:34:29.904794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:21:43.780 [2024-11-19 23:34:29.904804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.904931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.904946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.780 [2024-11-19 23:34:29.904958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:43.780 [2024-11-19 23:34:29.904969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.913330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.913377] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.780 [2024-11-19 23:34:29.913390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.334 ms 00:21:43.780 [2024-11-19 23:34:29.913399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.780 [2024-11-19 23:34:29.923121] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:43.780 [2024-11-19 23:34:29.926892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.780 [2024-11-19 23:34:29.926940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:43.780 [2024-11-19 23:34:29.926953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.423 ms 00:21:43.780 [2024-11-19 23:34:29.926963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.040997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.041076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:44.042 [2024-11-19 23:34:30.041097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.001 ms 00:21:44.042 [2024-11-19 23:34:30.041112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.041327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.041344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:44.042 [2024-11-19 23:34:30.041353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:21:44.042 [2024-11-19 23:34:30.041364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.047784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.048097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:44.042 [2024-11-19 23:34:30.048120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.379 ms 00:21:44.042 [2024-11-19 23:34:30.048134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.053695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.053769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:44.042 [2024-11-19 23:34:30.053781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.505 ms 00:21:44.042 [2024-11-19 23:34:30.053792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.054143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.054160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:44.042 [2024-11-19 23:34:30.054170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:21:44.042 [2024-11-19 23:34:30.054184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.105277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.105344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:44.042 [2024-11-19 23:34:30.105358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.033 ms 00:21:44.042 [2024-11-19 23:34:30.105380] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.113142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.113204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:44.042 [2024-11-19 23:34:30.113217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.681 ms 00:21:44.042 [2024-11-19 23:34:30.113228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.119454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.119687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:44.042 [2024-11-19 23:34:30.119707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:21:44.042 [2024-11-19 23:34:30.119718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.126389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.126589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:44.042 [2024-11-19 23:34:30.126609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.603 ms 00:21:44.042 [2024-11-19 23:34:30.126623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.126672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.126685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:44.042 [2024-11-19 23:34:30.126696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:44.042 [2024-11-19 23:34:30.126716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.126850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.042 [2024-11-19 23:34:30.126871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:44.042 [2024-11-19 23:34:30.126885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:44.042 [2024-11-19 23:34:30.126895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.042 [2024-11-19 23:34:30.128105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4455.772 ms, result 0 00:21:44.042 { 00:21:44.042 "name": "ftl0", 00:21:44.042 "uuid": "2df5abd4-1489-4fe8-8391-19c1c8098aee" 00:21:44.042 } 00:21:44.042 23:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:44.042 23:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:44.303 23:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:44.303 23:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:44.303 23:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:44.565 /dev/nbd0 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:44.565 1+0 records in 00:21:44.565 1+0 records out 00:21:44.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481546 s, 8.5 MB/s 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:21:44.565 23:34:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:44.565 [2024-11-19 23:34:30.683682] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:21:44.565 [2024-11-19 23:34:30.683844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88924 ] 00:21:44.835 [2024-11-19 23:34:30.843690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.835 [2024-11-19 23:34:30.885795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.222  [2024-11-19T23:34:33.358Z] Copying: 185/1024 [MB] (185 MBps) [2024-11-19T23:34:34.301Z] Copying: 376/1024 [MB] (190 MBps) [2024-11-19T23:34:35.247Z] Copying: 570/1024 [MB] (194 MBps) [2024-11-19T23:34:35.813Z] Copying: 823/1024 [MB] (252 MBps) [2024-11-19T23:34:36.073Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:21:49.881 00:21:49.881 23:34:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:51.793 23:34:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:51.793 [2024-11-19 23:34:37.981425] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:21:51.793 [2024-11-19 23:34:37.981517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89005 ] 00:21:52.052 [2024-11-19 23:34:38.129875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.052 [2024-11-19 23:34:38.152511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.425  [2024-11-19T23:34:40.551Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-19T23:34:41.486Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-19T23:34:42.420Z] Copying: 64/1024 [MB] (24 MBps) [2024-11-19T23:34:43.354Z] Copying: 83/1024 [MB] (18 MBps) [2024-11-19T23:34:44.287Z] Copying: 101/1024 [MB] (18 MBps) [2024-11-19T23:34:45.222Z] Copying: 120/1024 [MB] (19 MBps) [2024-11-19T23:34:46.597Z] Copying: 148/1024 [MB] (27 MBps) [2024-11-19T23:34:47.533Z] Copying: 170/1024 [MB] (21 MBps) [2024-11-19T23:34:48.466Z] Copying: 194/1024 [MB] (24 MBps) [2024-11-19T23:34:49.398Z] Copying: 218/1024 [MB] (23 MBps) [2024-11-19T23:34:50.330Z] Copying: 243/1024 [MB] (24 MBps) [2024-11-19T23:34:51.264Z] Copying: 260/1024 [MB] (17 MBps) [2024-11-19T23:34:52.637Z] Copying: 284/1024 [MB] (24 MBps) [2024-11-19T23:34:53.571Z] Copying: 312/1024 [MB] (27 MBps) [2024-11-19T23:34:54.506Z] Copying: 333/1024 [MB] (21 MBps) [2024-11-19T23:34:55.498Z] Copying: 357/1024 [MB] (23 MBps) [2024-11-19T23:34:56.461Z] Copying: 380/1024 [MB] (23 MBps) [2024-11-19T23:34:57.397Z] Copying: 404/1024 [MB] (23 MBps) [2024-11-19T23:34:58.335Z] Copying: 433/1024 [MB] (29 MBps) [2024-11-19T23:34:59.268Z] Copying: 456/1024 [MB] (22 MBps) [2024-11-19T23:35:00.643Z] Copying: 481/1024 [MB] (24 MBps) [2024-11-19T23:35:01.576Z] Copying: 505/1024 [MB] (24 MBps) [2024-11-19T23:35:02.511Z] Copying: 529/1024 [MB] (23 MBps) [2024-11-19T23:35:03.446Z] Copying: 553/1024 [MB] (23 MBps) [2024-11-19T23:35:04.380Z] Copying: 578/1024 [MB] (24 MBps) [2024-11-19T23:35:05.313Z] Copying: 603/1024 [MB] (25 MBps) [2024-11-19T23:35:06.246Z] Copying: 629/1024 [MB] (25 MBps) [2024-11-19T23:35:07.622Z] Copying: 652/1024 [MB] (23 MBps) [2024-11-19T23:35:08.557Z] Copying: 677/1024 [MB] (24 MBps) [2024-11-19T23:35:09.492Z] Copying: 701/1024 [MB] (24 MBps) [2024-11-19T23:35:10.426Z] Copying: 727/1024 [MB] (25 MBps) [2024-11-19T23:35:11.360Z] Copying: 751/1024 [MB] (24 MBps) [2024-11-19T23:35:12.293Z] Copying: 778/1024 [MB] (26 MBps) [2024-11-19T23:35:13.227Z] Copying: 802/1024 [MB] (24 MBps) [2024-11-19T23:35:14.602Z] Copying: 827/1024 [MB] (25 MBps) [2024-11-19T23:35:15.533Z] Copying: 853/1024 [MB] (25 MBps) [2024-11-19T23:35:16.465Z] Copying: 880/1024 [MB] (26 MBps) [2024-11-19T23:35:17.401Z] Copying: 911/1024 [MB] (31 MBps) [2024-11-19T23:35:18.335Z] Copying: 938/1024 [MB] (27 MBps) [2024-11-19T23:35:19.278Z] Copying: 970/1024 [MB] (32 MBps) [2024-11-19T23:35:20.218Z] Copying: 996/1024 [MB] (25 MBps) [2024-11-19T23:35:20.478Z] Copying: 1024/1024 [MB] (average 24 MBps) 00:22:34.286 00:22:34.286 23:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:34.286 23:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:34.546 23:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:34.810 [2024-11-19 23:35:20.756289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:34.810 [2024-11-19 23:35:20.756323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:34.810 [2024-11-19 23:35:20.756335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:34.810 [2024-11-19 23:35:20.756341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.756360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:34.810 [2024-11-19 23:35:20.756747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.756769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:34.810 [2024-11-19 23:35:20.756776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:22:34.810 [2024-11-19 23:35:20.756783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.758614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.758739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:34.810 [2024-11-19 23:35:20.758753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:22:34.810 [2024-11-19 23:35:20.758760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.773737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.773771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:34.810 [2024-11-19 23:35:20.773781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.960 ms 00:22:34.810 [2024-11-19 23:35:20.773788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.778721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.778752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:34.810 [2024-11-19 23:35:20.778760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:22:34.810 [2024-11-19 23:35:20.778768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.780045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.780074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:34.810 [2024-11-19 23:35:20.780082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:22:34.810 [2024-11-19 23:35:20.780088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.784625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.784658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:34.810 [2024-11-19 23:35:20.784666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.510 ms 00:22:34.810 [2024-11-19 23:35:20.784675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.784783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.784793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:34.810 [2024-11-19 23:35:20.784800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 
00:22:34.810 [2024-11-19 23:35:20.784810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.786713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.786748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:34.810 [2024-11-19 23:35:20.786756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.889 ms 00:22:34.810 [2024-11-19 23:35:20.786763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.788672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.788790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:34.810 [2024-11-19 23:35:20.788801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms 00:22:34.810 [2024-11-19 23:35:20.788808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.810 [2024-11-19 23:35:20.790400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.810 [2024-11-19 23:35:20.790425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:34.810 [2024-11-19 23:35:20.790432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:22:34.810 [2024-11-19 23:35:20.790439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.811 [2024-11-19 23:35:20.791926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.811 [2024-11-19 23:35:20.792023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:34.811 [2024-11-19 23:35:20.792034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:22:34.811 [2024-11-19 23:35:20.792040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.811 [2024-11-19 23:35:20.792063] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:34.811 [2024-11-19 23:35:20.792076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792309] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792474] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:34.811 [2024-11-19 23:35:20.792629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 
23:35:20.792635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:34.812 [2024-11-19 23:35:20.792754] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:34.812 [2024-11-19 23:35:20.792760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2df5abd4-1489-4fe8-8391-19c1c8098aee 00:22:34.812 [2024-11-19 23:35:20.792769] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:34.812 [2024-11-19 23:35:20.792774] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:34.812 [2024-11-19 23:35:20.792781] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:34.812 [2024-11-19 23:35:20.792790] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:34.812 [2024-11-19 23:35:20.792797] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:34.812 [2024-11-19 23:35:20.792803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:34.812 [2024-11-19 23:35:20.792810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:34.812 [2024-11-19 23:35:20.792815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:34.812 [2024-11-19 23:35:20.792821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:34.812 [2024-11-19 23:35:20.792827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.812 [2024-11-19 23:35:20.792833] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:34.812 [2024-11-19 23:35:20.792840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:22:34.812 [2024-11-19 23:35:20.792849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.794086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.812 [2024-11-19 23:35:20.794109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:34.812 [2024-11-19 23:35:20.794116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:22:34.812 [2024-11-19 23:35:20.794124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.794188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:34.812 [2024-11-19 23:35:20.794196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:34.812 [2024-11-19 23:35:20.794205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:34.812 [2024-11-19 23:35:20.794211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.798635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.798662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:34.812 [2024-11-19 23:35:20.798670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.798677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.798718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.798725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:34.812 [2024-11-19 23:35:20.798754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.798763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.798806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.798817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:34.812 [2024-11-19 23:35:20.798823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.798831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.798843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.798850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:34.812 [2024-11-19 23:35:20.798855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.798864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.806723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.806799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:34.812 [2024-11-19 23:35:20.806807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.806815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:22:34.812 [2024-11-19 23:35:20.813311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:34.812 [2024-11-19 23:35:20.813319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.813328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.813424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:34.812 [2024-11-19 23:35:20.813431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.813440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.813475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:34.812 [2024-11-19 23:35:20.813481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.813488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.813551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:34.812 [2024-11-19 23:35:20.813557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.813564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.813600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:34.812 [2024-11-19 23:35:20.813606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.813613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.813656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:34.812 [2024-11-19 23:35:20.813662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.813669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:34.812 [2024-11-19 23:35:20.813716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:34.812 [2024-11-19 23:35:20.813722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:34.812 [2024-11-19 23:35:20.813746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:34.812 [2024-11-19 23:35:20.813866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.534 ms, result 0 00:22:34.812 true 00:22:34.812 23:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88777 00:22:34.812 23:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88777 00:22:34.812 23:35:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:34.812 [2024-11-19 23:35:20.906222] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:22:34.812 [2024-11-19 23:35:20.906338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89453 ] 00:22:35.074 [2024-11-19 23:35:21.061685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.074 [2024-11-19 23:35:21.084122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.018  [2024-11-19T23:35:23.274Z] Copying: 258/1024 [MB] (258 MBps) [2024-11-19T23:35:24.217Z] Copying: 518/1024 [MB] (260 MBps) [2024-11-19T23:35:25.159Z] Copying: 778/1024 [MB] (259 MBps) [2024-11-19T23:35:25.422Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:22:39.230 00:22:39.230 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88777 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:39.230 23:35:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:39.230 [2024-11-19 23:35:25.289986] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:22:39.230 [2024-11-19 23:35:25.290122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89507 ] 00:22:39.492 [2024-11-19 23:35:25.446040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.492 [2024-11-19 23:35:25.464429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.492 [2024-11-19 23:35:25.546140] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.492 [2024-11-19 23:35:25.546191] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.492 [2024-11-19 23:35:25.607837] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:39.492 [2024-11-19 23:35:25.608122] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:39.492 [2024-11-19 23:35:25.608334] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:39.755 [2024-11-19 23:35:25.802315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.802348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:39.755 [2024-11-19 23:35:25.802359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:39.755 [2024-11-19 23:35:25.802365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.802398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.802407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:39.755 [2024-11-19 23:35:25.802413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:39.755 [2024-11-19 23:35:25.802419] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.802431] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:39.755 [2024-11-19 23:35:25.802603] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:39.755 [2024-11-19 23:35:25.802613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.802622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:39.755 [2024-11-19 23:35:25.802631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:22:39.755 [2024-11-19 23:35:25.802637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.803535] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:39.755 [2024-11-19 23:35:25.805504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.805627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:39.755 [2024-11-19 23:35:25.805640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:22:39.755 [2024-11-19 23:35:25.805646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.805684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.805691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:39.755 [2024-11-19 23:35:25.805699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:39.755 [2024-11-19 23:35:25.805705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.809966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.809990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:39.755 [2024-11-19 23:35:25.809998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:22:39.755 [2024-11-19 23:35:25.810004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.810071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.810078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:39.755 [2024-11-19 23:35:25.810087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:39.755 [2024-11-19 23:35:25.810096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.810144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.810151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:39.755 [2024-11-19 23:35:25.810160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:39.755 [2024-11-19 23:35:25.810166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.810181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:39.755 [2024-11-19 23:35:25.811296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.811320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
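The repeating Action / name / duration / status quadruples above are emitted by the FTL management-process tracer (trace_step in mngt/ftl_mngt.c) as each startup step completes; the per-step durations add up to the total the process reports at the end ("FTL startup", duration = 102.421 ms, result 0 further below). A minimal standalone sketch of that logging pattern, assuming a hypothetical run_step() helper and a placeholder step function — illustrative only, not SPDK's actual implementation:

    /*
     * Illustrative sketch only: mimics the Action/name/duration/status
     * trace pattern visible in the mngt/ftl_mngt.c log lines above.
     * The step name is taken from the log; run_step() and the step
     * function are hypothetical, not SPDK's real code.
     */
    #include <stdio.h>
    #include <time.h>

    static double elapsed_ms(const struct timespec *a, const struct timespec *b)
    {
            return (b->tv_sec - a->tv_sec) * 1000.0 +
                   (b->tv_nsec - a->tv_nsec) / 1000000.0;
    }

    /* Run one management step and emit the same four NOTICE fields. */
    static int run_step(const char *name, int (*fn)(void))
    {
            struct timespec start, end;
            int status;

            clock_gettime(CLOCK_MONOTONIC, &start);
            status = fn();
            clock_gettime(CLOCK_MONOTONIC, &end);

            printf("[FTL][ftl0] Action\n");
            printf("[FTL][ftl0] name:     %s\n", name);
            printf("[FTL][ftl0] duration: %.3f ms\n", elapsed_ms(&start, &end));
            printf("[FTL][ftl0] status:   %d\n", status);
            return status;
    }

    static int load_super_block(void) { return 0; } /* placeholder step */

    int main(void)
    {
            /* A management sequence aborts on the first non-zero status. */
            return run_step("Load super block", load_super_block);
    }

A non-zero status at any step is what triggers the Rollback entries seen after the earlier startup attempt, where each completed step is undone in reverse order with duration 0.000 ms.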
00:22:39.755 [2024-11-19 23:35:25.811326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:22:39.755 [2024-11-19 23:35:25.811335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.811356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.755 [2024-11-19 23:35:25.811362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:39.755 [2024-11-19 23:35:25.811372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:39.755 [2024-11-19 23:35:25.811379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.755 [2024-11-19 23:35:25.811393] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:39.755 [2024-11-19 23:35:25.811409] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:39.755 [2024-11-19 23:35:25.811438] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:39.755 [2024-11-19 23:35:25.811454] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:39.755 [2024-11-19 23:35:25.811531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:39.755 [2024-11-19 23:35:25.811541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:39.755 [2024-11-19 23:35:25.811549] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:39.756 [2024-11-19 23:35:25.811560] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811566] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:39.756 [2024-11-19 23:35:25.811580] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:39.756 [2024-11-19 23:35:25.811585] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:39.756 [2024-11-19 23:35:25.811591] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:39.756 [2024-11-19 23:35:25.811598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.756 [2024-11-19 23:35:25.811603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:39.756 [2024-11-19 23:35:25.811609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:22:39.756 [2024-11-19 23:35:25.811614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.756 [2024-11-19 23:35:25.811683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.756 [2024-11-19 23:35:25.811690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:39.756 [2024-11-19 23:35:25.811696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:39.756 [2024-11-19 23:35:25.811704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.756 [2024-11-19 23:35:25.811792] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:39.756 [2024-11-19 23:35:25.811806] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:39.756 [2024-11-19 23:35:25.811812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:39.756 [2024-11-19 23:35:25.811828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:39.756 [2024-11-19 23:35:25.811844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.756 [2024-11-19 23:35:25.811854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:39.756 [2024-11-19 23:35:25.811863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:39.756 [2024-11-19 23:35:25.811872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.756 [2024-11-19 23:35:25.811877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:39.756 [2024-11-19 23:35:25.811882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:39.756 [2024-11-19 23:35:25.811887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:39.756 [2024-11-19 23:35:25.811905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:39.756 [2024-11-19 23:35:25.811920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:39.756 [2024-11-19 23:35:25.811934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:39.756 [2024-11-19 23:35:25.811950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:39.756 [2024-11-19 23:35:25.811971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.756 [2024-11-19 23:35:25.811982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:39.756 [2024-11-19 23:35:25.811988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:39.756 [2024-11-19 23:35:25.811994] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.756 [2024-11-19 23:35:25.811999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:39.756 [2024-11-19 23:35:25.812005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:39.756 [2024-11-19 23:35:25.812010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.756 [2024-11-19 23:35:25.812016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:39.756 [2024-11-19 23:35:25.812022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:39.756 [2024-11-19 23:35:25.812027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.756 [2024-11-19 23:35:25.812033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:39.756 [2024-11-19 23:35:25.812038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:39.756 [2024-11-19 23:35:25.812044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.756 [2024-11-19 23:35:25.812052] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:39.756 [2024-11-19 23:35:25.812060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:39.756 [2024-11-19 23:35:25.812065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:39.756 [2024-11-19 23:35:25.812071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.756 [2024-11-19 23:35:25.812076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:39.756 [2024-11-19 23:35:25.812081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:39.756 [2024-11-19 23:35:25.812086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:39.756 [2024-11-19 23:35:25.812090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:39.756 [2024-11-19 23:35:25.812095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:39.756 [2024-11-19 23:35:25.812099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:39.756 [2024-11-19 23:35:25.812105] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:39.756 [2024-11-19 23:35:25.812112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.756 [2024-11-19 23:35:25.812118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:39.756 [2024-11-19 23:35:25.812123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:39.756 [2024-11-19 23:35:25.812128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:39.756 [2024-11-19 23:35:25.812133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:39.756 [2024-11-19 23:35:25.812138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:39.756 [2024-11-19 23:35:25.812145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc 
ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:39.756 [2024-11-19 23:35:25.812150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:39.756 [2024-11-19 23:35:25.812155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:39.756 [2024-11-19 23:35:25.812160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:39.756 [2024-11-19 23:35:25.812165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:39.756 [2024-11-19 23:35:25.812170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:39.756 [2024-11-19 23:35:25.812175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:39.756 [2024-11-19 23:35:25.812180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:39.756 [2024-11-19 23:35:25.812186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:39.756 [2024-11-19 23:35:25.812191] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:39.756 [2024-11-19 23:35:25.812197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.756 [2024-11-19 23:35:25.812204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:39.756 [2024-11-19 23:35:25.812209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:39.756 [2024-11-19 23:35:25.812214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:39.756 [2024-11-19 23:35:25.812219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:39.756 [2024-11-19 23:35:25.812227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.756 [2024-11-19 23:35:25.812237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:39.756 [2024-11-19 23:35:25.812242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:22:39.756 [2024-11-19 23:35:25.812248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.756 [2024-11-19 23:35:25.820104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.756 [2024-11-19 23:35:25.820202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.756 [2024-11-19 23:35:25.820250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.824 ms 00:22:39.756 [2024-11-19 23:35:25.820268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.756 [2024-11-19 23:35:25.820338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.756 [2024-11-19 23:35:25.820434] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:39.757 [2024-11-19 23:35:25.820457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:39.757 [2024-11-19 23:35:25.820472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.838201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.838330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.757 [2024-11-19 23:35:25.838389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.682 ms 00:22:39.757 [2024-11-19 23:35:25.838412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.838670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.838822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.757 [2024-11-19 23:35:25.838879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:39.757 [2024-11-19 23:35:25.838952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.839394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.839612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.757 [2024-11-19 23:35:25.839786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:22:39.757 [2024-11-19 23:35:25.839853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.840238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.840411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.757 [2024-11-19 23:35:25.840560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:22:39.757 [2024-11-19 23:35:25.840621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.849462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.849688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.757 [2024-11-19 23:35:25.849897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.787 ms 00:22:39.757 [2024-11-19 23:35:25.849964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.853316] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:39.757 [2024-11-19 23:35:25.853433] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:39.757 [2024-11-19 23:35:25.853490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.853510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:39.757 [2024-11-19 23:35:25.853529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:22:39.757 [2024-11-19 23:35:25.853547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.867856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.867970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:39.757 [2024-11-19 
23:35:25.868020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.268 ms 00:22:39.757 [2024-11-19 23:35:25.868042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.870008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.870103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:39.757 [2024-11-19 23:35:25.870148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms 00:22:39.757 [2024-11-19 23:35:25.870169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.872020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.872114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:39.757 [2024-11-19 23:35:25.872158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.815 ms 00:22:39.757 [2024-11-19 23:35:25.872178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.872500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.872545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:39.757 [2024-11-19 23:35:25.872599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:22:39.757 [2024-11-19 23:35:25.872620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.888577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.888714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:39.757 [2024-11-19 23:35:25.888787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.928 ms 00:22:39.757 [2024-11-19 23:35:25.888814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.896307] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.757 [2024-11-19 23:35:25.898690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.898802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.757 [2024-11-19 23:35:25.898851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.796 ms 00:22:39.757 [2024-11-19 23:35:25.898872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.898938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.898967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:39.757 [2024-11-19 23:35:25.898987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:39.757 [2024-11-19 23:35:25.899008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.899106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.899151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.757 [2024-11-19 23:35:25.899176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:39.757 [2024-11-19 23:35:25.899195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.899227] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.899248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:39.757 [2024-11-19 23:35:25.899574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:39.757 [2024-11-19 23:35:25.899618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.899689] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:39.757 [2024-11-19 23:35:25.899716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.899753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:39.757 [2024-11-19 23:35:25.899775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:39.757 [2024-11-19 23:35:25.900072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.904109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.904159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:39.757 [2024-11-19 23:35:25.904171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.012 ms 00:22:39.757 [2024-11-19 23:35:25.904179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.904252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.757 [2024-11-19 23:35:25.904263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:39.757 [2024-11-19 23:35:25.904271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:39.757 [2024-11-19 23:35:25.904279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.757 [2024-11-19 23:35:25.905157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.421 ms, result 0 00:22:41.143  [2024-11-19T23:35:28.277Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-19T23:35:29.221Z] Copying: 61/1024 [MB] (34 MBps) [2024-11-19T23:35:30.166Z] Copying: 83/1024 [MB] (22 MBps) [2024-11-19T23:35:31.109Z] Copying: 102/1024 [MB] (18 MBps) [2024-11-19T23:35:32.052Z] Copying: 127/1024 [MB] (25 MBps) [2024-11-19T23:35:32.991Z] Copying: 141/1024 [MB] (13 MBps) [2024-11-19T23:35:33.935Z] Copying: 154/1024 [MB] (12 MBps) [2024-11-19T23:35:35.319Z] Copying: 175/1024 [MB] (21 MBps) [2024-11-19T23:35:36.265Z] Copying: 188/1024 [MB] (13 MBps) [2024-11-19T23:35:37.205Z] Copying: 199/1024 [MB] (11 MBps) [2024-11-19T23:35:38.148Z] Copying: 216/1024 [MB] (16 MBps) [2024-11-19T23:35:39.091Z] Copying: 233/1024 [MB] (16 MBps) [2024-11-19T23:35:40.034Z] Copying: 252/1024 [MB] (18 MBps) [2024-11-19T23:35:40.978Z] Copying: 280/1024 [MB] (28 MBps) [2024-11-19T23:35:41.921Z] Copying: 321/1024 [MB] (41 MBps) [2024-11-19T23:35:43.308Z] Copying: 367/1024 [MB] (45 MBps) [2024-11-19T23:35:44.253Z] Copying: 401/1024 [MB] (33 MBps) [2024-11-19T23:35:45.197Z] Copying: 439/1024 [MB] (38 MBps) [2024-11-19T23:35:46.150Z] Copying: 466/1024 [MB] (26 MBps) [2024-11-19T23:35:47.095Z] Copying: 489/1024 [MB] (23 MBps) [2024-11-19T23:35:48.043Z] Copying: 508/1024 [MB] (19 MBps) [2024-11-19T23:35:48.987Z] Copying: 524/1024 [MB] (15 MBps) [2024-11-19T23:35:49.929Z] Copying: 550/1024 [MB] (25 MBps) [2024-11-19T23:35:51.317Z] Copying: 585/1024 [MB] (35 MBps) [2024-11-19T23:35:52.262Z] 
Copying: 604/1024 [MB] (18 MBps) [2024-11-19T23:35:53.217Z] Copying: 627/1024 [MB] (23 MBps) [2024-11-19T23:35:54.164Z] Copying: 648/1024 [MB] (21 MBps) [2024-11-19T23:35:55.107Z] Copying: 670/1024 [MB] (21 MBps) [2024-11-19T23:35:56.052Z] Copying: 687/1024 [MB] (16 MBps) [2024-11-19T23:35:56.994Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-19T23:35:57.939Z] Copying: 721/1024 [MB] (19 MBps) [2024-11-19T23:35:59.327Z] Copying: 734/1024 [MB] (13 MBps) [2024-11-19T23:36:00.308Z] Copying: 748/1024 [MB] (14 MBps) [2024-11-19T23:36:00.919Z] Copying: 771/1024 [MB] (22 MBps) [2024-11-19T23:36:02.304Z] Copying: 795/1024 [MB] (24 MBps) [2024-11-19T23:36:03.245Z] Copying: 835/1024 [MB] (40 MBps) [2024-11-19T23:36:04.187Z] Copying: 867/1024 [MB] (31 MBps) [2024-11-19T23:36:05.130Z] Copying: 891/1024 [MB] (23 MBps) [2024-11-19T23:36:06.074Z] Copying: 925/1024 [MB] (34 MBps) [2024-11-19T23:36:07.017Z] Copying: 935/1024 [MB] (10 MBps) [2024-11-19T23:36:07.960Z] Copying: 959/1024 [MB] (23 MBps) [2024-11-19T23:36:09.345Z] Copying: 984/1024 [MB] (25 MBps) [2024-11-19T23:36:09.919Z] Copying: 1019/1024 [MB] (35 MBps) [2024-11-19T23:36:10.180Z] Copying: 1048520/1048576 [kB] (4248 kBps) [2024-11-19T23:36:10.180Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-19 23:36:09.979887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.988 [2024-11-19 23:36:09.979964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:23.988 [2024-11-19 23:36:09.979983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:23.988 [2024-11-19 23:36:09.979992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.988 [2024-11-19 23:36:09.980987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:23.988 [2024-11-19 23:36:09.983723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.988 [2024-11-19 23:36:09.983786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:23.988 [2024-11-19 23:36:09.983799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.693 ms 00:23:23.988 [2024-11-19 23:36:09.983809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.988 [2024-11-19 23:36:09.997514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.988 [2024-11-19 23:36:09.997562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:23.988 [2024-11-19 23:36:09.997586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.411 ms 00:23:23.988 [2024-11-19 23:36:09.997599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.988 [2024-11-19 23:36:10.024910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.988 [2024-11-19 23:36:10.025142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:23.988 [2024-11-19 23:36:10.025164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.294 ms 00:23:23.988 [2024-11-19 23:36:10.025174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.988 [2024-11-19 23:36:10.031374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.988 [2024-11-19 23:36:10.031415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:23.988 [2024-11-19 23:36:10.031427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.161 ms 00:23:23.988 [2024-11-19 
23:36:10.031435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.988 [2024-11-19 23:36:10.034017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.988 [2024-11-19 23:36:10.034063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:23.988 [2024-11-19 23:36:10.034074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:23:23.988 [2024-11-19 23:36:10.034082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.988 [2024-11-19 23:36:10.039602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.988 [2024-11-19 23:36:10.039814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:23.989 [2024-11-19 23:36:10.039835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.480 ms 00:23:23.989 [2024-11-19 23:36:10.039845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-11-19 23:36:10.269049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-11-19 23:36:10.269226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:24.251 [2024-11-19 23:36:10.269299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 229.066 ms 00:23:24.251 [2024-11-19 23:36:10.269324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-11-19 23:36:10.272224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-11-19 23:36:10.272386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:24.251 [2024-11-19 23:36:10.272455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms 00:23:24.251 [2024-11-19 23:36:10.272481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-11-19 23:36:10.274474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-11-19 23:36:10.274629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:24.251 [2024-11-19 23:36:10.274694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:23:24.251 [2024-11-19 23:36:10.274716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-11-19 23:36:10.276296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-11-19 23:36:10.276449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:24.251 [2024-11-19 23:36:10.276515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:23:24.251 [2024-11-19 23:36:10.276538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-11-19 23:36:10.278102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.251 [2024-11-19 23:36:10.278257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:24.251 [2024-11-19 23:36:10.278321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:23:24.251 [2024-11-19 23:36:10.278343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.251 [2024-11-19 23:36:10.278387] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:24.251 [2024-11-19 23:36:10.278417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 101376 / 261120 wr_cnt: 1 state: open 00:23:24.251 [2024-11-19 
23:36:10.278508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free 00:23:24.252 [2024-11-19 23:36:10.282870] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] 00:23:24.252 [2024-11-19 23:36:10.282924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2df5abd4-1489-4fe8-8391-19c1c8098aee 00:23:24.252 [2024-11-19 23:36:10.282949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 101376 00:23:24.252 [2024-11-19 23:36:10.282969] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 102336 00:23:24.252 [2024-11-19 23:36:10.282989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 101376 00:23:24.252 [2024-11-19 23:36:10.283014] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:23:24.252 [2024-11-19 23:36:10.283035] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:24.252 [2024-11-19 23:36:10.283078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:24.252 [2024-11-19 23:36:10.283099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:24.252 [2024-11-19 23:36:10.283118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:24.252 [2024-11-19 23:36:10.283138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:24.252 [2024-11-19 23:36:10.283164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.252 [2024-11-19 23:36:10.283188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:24.252 [2024-11-19 23:36:10.283213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:23:24.252 [2024-11-19 23:36:10.283240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.252 [2024-11-19 23:36:10.286492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.252 [2024-11-19 23:36:10.286573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:24.252 [2024-11-19 23:36:10.286599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:23:24.252 [2024-11-19 23:36:10.286619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.252 [2024-11-19 23:36:10.286839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.252 [2024-11-19 23:36:10.286879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:24.252 [2024-11-19 23:36:10.286905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:23:24.252 [2024-11-19 23:36:10.286926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.252 [2024-11-19 23:36:10.296578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.252 [2024-11-19 23:36:10.296642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.252 [2024-11-19 23:36:10.296658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.252 [2024-11-19 23:36:10.296667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.252 [2024-11-19 23:36:10.296773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.252 [2024-11-19 23:36:10.296786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.253 [2024-11-19 23:36:10.296801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.296810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.296873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:23:24.253 [2024-11-19 23:36:10.296883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.253 [2024-11-19 23:36:10.296892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.296900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.296917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.296933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.253 [2024-11-19 23:36:10.296944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.296952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.311410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.311465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.253 [2024-11-19 23:36:10.311478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.311486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.323220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.323434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.253 [2024-11-19 23:36:10.323453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.323461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.323519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.323528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.253 [2024-11-19 23:36:10.323537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.323546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.323612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.323622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.253 [2024-11-19 23:36:10.323632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.323644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.323719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.323750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.253 [2024-11-19 23:36:10.323760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.323768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.323800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.323809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:24.253 [2024-11-19 23:36:10.323818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.323828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 
23:36:10.323871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.323893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.253 [2024-11-19 23:36:10.323901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.323909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.323957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:24.253 [2024-11-19 23:36:10.323970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.253 [2024-11-19 23:36:10.323986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:24.253 [2024-11-19 23:36:10.323999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.253 [2024-11-19 23:36:10.324135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.368 ms, result 0 00:23:25.196 00:23:25.196 00:23:25.196 23:36:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:27.746 23:36:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:27.746 [2024-11-19 23:36:13.471555] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:23:27.746 [2024-11-19 23:36:13.471667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90003 ] 00:23:27.746 [2024-11-19 23:36:13.626917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.746 [2024-11-19 23:36:13.656701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.746 [2024-11-19 23:36:13.767592] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:27.746 [2024-11-19 23:36:13.767672] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:27.746 [2024-11-19 23:36:13.929760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.746 [2024-11-19 23:36:13.929820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:27.746 [2024-11-19 23:36:13.929841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:27.746 [2024-11-19 23:36:13.929850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.746 [2024-11-19 23:36:13.929910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.746 [2024-11-19 23:36:13.929921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:27.746 [2024-11-19 23:36:13.929930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:27.746 [2024-11-19 23:36:13.929941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.746 [2024-11-19 23:36:13.929969] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:27.746 [2024-11-19 23:36:13.930260] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:23:27.746 [2024-11-19 23:36:13.930279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:27.746 [2024-11-19 23:36:13.930288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:27.746 [2024-11-19 23:36:13.930297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:23:27.746 [2024-11-19 23:36:13.930308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:27.746 [2024-11-19 23:36:13.932222] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:28.009 [2024-11-19 23:36:13.936218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.936283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:28.009 [2024-11-19 23:36:13.936299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:23:28.009 [2024-11-19 23:36:13.936311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.936390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.936400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:28.009 [2024-11-19 23:36:13.936409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:28.009 [2024-11-19 23:36:13.936417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.944771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.944812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.009 [2024-11-19 23:36:13.944834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.307 ms 00:23:28.009 [2024-11-19 23:36:13.944846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.944946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.944956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.009 [2024-11-19 23:36:13.944970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:28.009 [2024-11-19 23:36:13.944977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.945044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.945056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:28.009 [2024-11-19 23:36:13.945064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:28.009 [2024-11-19 23:36:13.945071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.945099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:28.009 [2024-11-19 23:36:13.947225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.947265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.009 [2024-11-19 23:36:13.947275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:23:28.009 [2024-11-19 23:36:13.947289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.947336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 
[2024-11-19 23:36:13.947345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:28.009 [2024-11-19 23:36:13.947354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:28.009 [2024-11-19 23:36:13.947362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.947388] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:28.009 [2024-11-19 23:36:13.947409] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:28.009 [2024-11-19 23:36:13.947447] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:28.009 [2024-11-19 23:36:13.947468] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:28.009 [2024-11-19 23:36:13.947578] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:28.009 [2024-11-19 23:36:13.947591] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:28.009 [2024-11-19 23:36:13.947602] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:28.009 [2024-11-19 23:36:13.947617] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:28.009 [2024-11-19 23:36:13.947628] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:28.009 [2024-11-19 23:36:13.947636] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:28.009 [2024-11-19 23:36:13.947645] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:28.009 [2024-11-19 23:36:13.947656] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:28.009 [2024-11-19 23:36:13.947664] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:28.009 [2024-11-19 23:36:13.947673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.947682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:28.009 [2024-11-19 23:36:13.947691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:23:28.009 [2024-11-19 23:36:13.947699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.947802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.009 [2024-11-19 23:36:13.947818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:28.009 [2024-11-19 23:36:13.947826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:23:28.009 [2024-11-19 23:36:13.947834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.009 [2024-11-19 23:36:13.947954] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:28.009 [2024-11-19 23:36:13.947967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:28.009 [2024-11-19 23:36:13.947976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.009 [2024-11-19 23:36:13.947990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.009 [2024-11-19 23:36:13.947998] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:28.009 [2024-11-19 23:36:13.948011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:28.009 [2024-11-19 23:36:13.948026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:28.009 [2024-11-19 23:36:13.948036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:28.009 [2024-11-19 23:36:13.948044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:28.009 [2024-11-19 23:36:13.948050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.009 [2024-11-19 23:36:13.948058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:28.009 [2024-11-19 23:36:13.948065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:28.010 [2024-11-19 23:36:13.948074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.010 [2024-11-19 23:36:13.948081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:28.010 [2024-11-19 23:36:13.948089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:28.010 [2024-11-19 23:36:13.948104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:28.010 [2024-11-19 23:36:13.948118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:28.010 [2024-11-19 23:36:13.948125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:28.010 [2024-11-19 23:36:13.948138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.010 [2024-11-19 23:36:13.948156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:28.010 [2024-11-19 23:36:13.948166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.010 [2024-11-19 23:36:13.948188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:28.010 [2024-11-19 23:36:13.948194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.010 [2024-11-19 23:36:13.948207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:28.010 [2024-11-19 23:36:13.948215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.010 [2024-11-19 23:36:13.948229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:28.010 [2024-11-19 23:36:13.948236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.010 [2024-11-19 23:36:13.948249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:28.010 [2024-11-19 23:36:13.948257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:28.010 [2024-11-19 
23:36:13.948264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.010 [2024-11-19 23:36:13.948271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:28.010 [2024-11-19 23:36:13.948280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:28.010 [2024-11-19 23:36:13.948286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:28.010 [2024-11-19 23:36:13.948299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:28.010 [2024-11-19 23:36:13.948306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948312] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:28.010 [2024-11-19 23:36:13.948320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:28.010 [2024-11-19 23:36:13.948330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.010 [2024-11-19 23:36:13.948337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.010 [2024-11-19 23:36:13.948353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:28.010 [2024-11-19 23:36:13.948361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:28.010 [2024-11-19 23:36:13.948369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:28.010 [2024-11-19 23:36:13.948376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:28.010 [2024-11-19 23:36:13.948382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:28.010 [2024-11-19 23:36:13.948389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:28.010 [2024-11-19 23:36:13.948398] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:28.010 [2024-11-19 23:36:13.948411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.010 [2024-11-19 23:36:13.948420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:28.010 [2024-11-19 23:36:13.948428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:28.010 [2024-11-19 23:36:13.948435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:28.010 [2024-11-19 23:36:13.948442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:28.010 [2024-11-19 23:36:13.948449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:28.010 [2024-11-19 23:36:13.948457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:28.010 [2024-11-19 23:36:13.948464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:28.010 [2024-11-19 23:36:13.948471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:28.010 [2024-11-19 23:36:13.948478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:28.010 [2024-11-19 23:36:13.948486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:28.010 [2024-11-19 23:36:13.948494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:28.010 [2024-11-19 23:36:13.948501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:28.010 [2024-11-19 23:36:13.948509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:28.010 [2024-11-19 23:36:13.948516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:28.010 [2024-11-19 23:36:13.948523] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:28.010 [2024-11-19 23:36:13.948534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.010 [2024-11-19 23:36:13.948544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:28.010 [2024-11-19 23:36:13.948551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:28.010 [2024-11-19 23:36:13.948558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:28.010 [2024-11-19 23:36:13.948564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:28.010 [2024-11-19 23:36:13.948571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.948581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:28.010 [2024-11-19 23:36:13.948594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:23:28.010 [2024-11-19 23:36:13.948601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.963050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.963097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:28.010 [2024-11-19 23:36:13.963120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.400 ms 00:23:28.010 [2024-11-19 23:36:13.963131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.963223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.963232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:28.010 [2024-11-19 23:36:13.963245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:28.010 [2024-11-19 23:36:13.963252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.982411] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.982474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.010 [2024-11-19 23:36:13.982491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.095 ms 00:23:28.010 [2024-11-19 23:36:13.982508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.982572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.982585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.010 [2024-11-19 23:36:13.982596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:28.010 [2024-11-19 23:36:13.982606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.983239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.983283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.010 [2024-11-19 23:36:13.983299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:23:28.010 [2024-11-19 23:36:13.983311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.983507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.983521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.010 [2024-11-19 23:36:13.983533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:23:28.010 [2024-11-19 23:36:13.983543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.992158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.992213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.010 [2024-11-19 23:36:13.992231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.590 ms 00:23:28.010 [2024-11-19 23:36:13.992240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.010 [2024-11-19 23:36:13.996196] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:28.010 [2024-11-19 23:36:13.996250] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:28.010 [2024-11-19 23:36:13.996263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.010 [2024-11-19 23:36:13.996271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:28.010 [2024-11-19 23:36:13.996281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.920 ms 00:23:28.010 [2024-11-19 23:36:13.996289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.012360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.012438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:28.011 [2024-11-19 23:36:14.012451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.013 ms 00:23:28.011 [2024-11-19 23:36:14.012460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.015603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 
[2024-11-19 23:36:14.015651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:28.011 [2024-11-19 23:36:14.015662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:23:28.011 [2024-11-19 23:36:14.015670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.018398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.018446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:28.011 [2024-11-19 23:36:14.018456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:23:28.011 [2024-11-19 23:36:14.018465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.018840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.018855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:28.011 [2024-11-19 23:36:14.018865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:23:28.011 [2024-11-19 23:36:14.018873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.044144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.044201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:28.011 [2024-11-19 23:36:14.044214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.252 ms 00:23:28.011 [2024-11-19 23:36:14.044223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.052371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:28.011 [2024-11-19 23:36:14.055473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.055522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:28.011 [2024-11-19 23:36:14.055534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.201 ms 00:23:28.011 [2024-11-19 23:36:14.055543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.055625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.055636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:28.011 [2024-11-19 23:36:14.055646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:28.011 [2024-11-19 23:36:14.055654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.057402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.057445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:28.011 [2024-11-19 23:36:14.057460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.705 ms 00:23:28.011 [2024-11-19 23:36:14.057469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.057495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.057505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:28.011 [2024-11-19 23:36:14.057514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:28.011 
[2024-11-19 23:36:14.057528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.057568] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:28.011 [2024-11-19 23:36:14.057578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.057586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:28.011 [2024-11-19 23:36:14.057597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:28.011 [2024-11-19 23:36:14.057609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.063138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.063178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:28.011 [2024-11-19 23:36:14.063199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.510 ms 00:23:28.011 [2024-11-19 23:36:14.063211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.063394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.011 [2024-11-19 23:36:14.063418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:28.011 [2024-11-19 23:36:14.063428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:28.011 [2024-11-19 23:36:14.063437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.011 [2024-11-19 23:36:14.065063] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.853 ms, result 0 00:23:29.397 [2024-11-19T23:36:16.534Z] Copying: 1028/1048576 [kB] (1028 kBps) … [2024-11-19T23:36:56.564Z] Copying: 1024/1024 [MB] (average 24 MBps) [2024-11-19 23:36:56.414016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.372 [2024-11-19 23:36:56.414096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:10.372 [2024-11-19 23:36:56.414114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:10.372 [2024-11-19 23:36:56.414124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.372 [2024-11-19 23:36:56.414150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:10.372 [2024-11-19 23:36:56.415027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.372 [2024-11-19 23:36:56.415073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:10.372 [2024-11-19 23:36:56.415086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:24:10.372 [2024-11-19 23:36:56.415096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.372 [2024-11-19 23:36:56.415351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.372 [2024-11-19 23:36:56.415363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:10.372 [2024-11-19 23:36:56.415373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:24:10.372 [2024-11-19 23:36:56.415381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.372 [2024-11-19 23:36:56.432294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.372 [2024-11-19 23:36:56.432357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:10.372 [2024-11-19 23:36:56.432372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.893 ms 00:24:10.372 [2024-11-19 23:36:56.432391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.372 [2024-11-19 23:36:56.438524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.372 [2024-11-19 23:36:56.438581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:10.373 [2024-11-19 23:36:56.438593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:24:10.373 [2024-11-19 23:36:56.438601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.441532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.373 [2024-11-19 23:36:56.441585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:10.373 [2024-11-19 23:36:56.441595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.866 ms
00:24:10.373 [2024-11-19 23:36:56.441603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.446721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.373 [2024-11-19 23:36:56.446790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:10.373 [2024-11-19 23:36:56.446810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.076 ms 00:24:10.373 [2024-11-19 23:36:56.446819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.451691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.373 [2024-11-19 23:36:56.451770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:10.373 [2024-11-19 23:36:56.451789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms 00:24:10.373 [2024-11-19 23:36:56.451802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.455356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.373 [2024-11-19 23:36:56.455433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:10.373 [2024-11-19 23:36:56.455448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:24:10.373 [2024-11-19 23:36:56.455460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.458237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.373 [2024-11-19 23:36:56.458297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:10.373 [2024-11-19 23:36:56.458312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:24:10.373 [2024-11-19 23:36:56.458324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.460831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.373 [2024-11-19 23:36:56.460891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:10.373 [2024-11-19 23:36:56.460906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:24:10.373 [2024-11-19 23:36:56.460918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.463512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.373 [2024-11-19 23:36:56.463570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:10.373 [2024-11-19 23:36:56.463585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.505 ms 00:24:10.373 [2024-11-19 23:36:56.463597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.373 [2024-11-19 23:36:56.463644] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:10.373 [2024-11-19 23:36:56.463665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:10.373 [2024-11-19 23:36:56.463681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:10.373 [2024-11-19 23:36:56.463695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 
23:36:56.463723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.463896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:24:10.373 [2024-11-19 23:36:56.464427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:10.373 [2024-11-19 23:36:56.464886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.464907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.464926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.464945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.464967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.464985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:10.374 [2024-11-19 23:36:56.465917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:10.374 [2024-11-19 23:36:56.465946] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2df5abd4-1489-4fe8-8391-19c1c8098aee 00:24:10.374 [2024-11-19 23:36:56.465991] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:10.374 [2024-11-19 23:36:56.466009] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 163264 
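[editor's note] Two of the counters in this ftl_debug.c dump can be cross-checked against each other: the "total valid LBAs" figure equals the sum of the per-band valid counts listed above (261120 + 1536), and the WAF reported just below is simply the ratio of total writes to the user-writes counter that follows. A minimal sketch of both checks, using the values from this log (variable names are ours):

# Valid LBAs: sum of the per-band "valid / size" counts dumped above.
band_valid = {1: 261120, 2: 1536}           # bands 3..100 are 0 / free
assert sum(band_valid.values()) == 262656   # "total valid LBAs" above

# Write amplification: total media writes over user-initiated writes.
total_writes, user_writes = 163264, 161280  # counters from this dump
print(f"WAF: {total_writes / user_writes:.4f}")  # -> WAF: 1.0123, as logged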
00:24:10.374 [2024-11-19 23:36:56.466028] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 161280 00:24:10.374 [2024-11-19 23:36:56.466049] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0123 00:24:10.374 [2024-11-19 23:36:56.466067] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:10.374 [2024-11-19 23:36:56.466084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:10.374 [2024-11-19 23:36:56.466101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:10.374 [2024-11-19 23:36:56.466117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:10.374 [2024-11-19 23:36:56.466133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:10.374 [2024-11-19 23:36:56.466162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.374 [2024-11-19 23:36:56.466226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:10.374 [2024-11-19 23:36:56.466256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:24:10.374 [2024-11-19 23:36:56.466275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.374 [2024-11-19 23:36:56.470086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.374 [2024-11-19 23:36:56.470157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:10.374 [2024-11-19 23:36:56.470179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.720 ms 00:24:10.374 [2024-11-19 23:36:56.470198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.374 [2024-11-19 23:36:56.470393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.374 [2024-11-19 23:36:56.470426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:10.374 [2024-11-19 23:36:56.470461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:24:10.374 [2024-11-19 23:36:56.470490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.374 [2024-11-19 23:36:56.481118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.374 [2024-11-19 23:36:56.481170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:10.374 [2024-11-19 23:36:56.481182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.374 [2024-11-19 23:36:56.481192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.374 [2024-11-19 23:36:56.481288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.374 [2024-11-19 23:36:56.481299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:10.375 [2024-11-19 23:36:56.481318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.481333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.481409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.481462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:10.375 [2024-11-19 23:36:56.481473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.481482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.481501] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.481511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:10.375 [2024-11-19 23:36:56.481522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.481534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.501206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.501265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:10.375 [2024-11-19 23:36:56.501278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.501288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.516220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.516274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:10.375 [2024-11-19 23:36:56.516287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.516309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.516367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.516379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.375 [2024-11-19 23:36:56.516388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.516397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.516474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.516667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.375 [2024-11-19 23:36:56.516676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.516686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.516799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.516820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.375 [2024-11-19 23:36:56.516830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.516840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.516877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.516889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:10.375 [2024-11-19 23:36:56.516900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.516910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.516972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.516986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.375 [2024-11-19 23:36:56.517000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.517010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:10.375 [2024-11-19 23:36:56.517070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:10.375 [2024-11-19 23:36:56.517093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.375 [2024-11-19 23:36:56.517103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:10.375 [2024-11-19 23:36:56.517112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.375 [2024-11-19 23:36:56.517287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 103.216 ms, result 0 00:24:10.636 00:24:10.636 00:24:10.636 23:36:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:13.178 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:13.178 23:36:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:13.178 [2024-11-19 23:36:58.912813] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:24:13.178 [2024-11-19 23:36:58.912933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90463 ] 00:24:13.178 [2024-11-19 23:36:59.070994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.178 [2024-11-19 23:36:59.108849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.178 [2024-11-19 23:36:59.250506] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.178 [2024-11-19 23:36:59.250597] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:13.440 [2024-11-19 23:36:59.414475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.414548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:13.440 [2024-11-19 23:36:59.414568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:13.440 [2024-11-19 23:36:59.414576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.414637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.414649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:13.440 [2024-11-19 23:36:59.414658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:13.440 [2024-11-19 23:36:59.414666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.414690] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:13.440 [2024-11-19 23:36:59.415004] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:13.440 [2024-11-19 23:36:59.415039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.415049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:13.440 [2024-11-19 23:36:59.415058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.357 ms 00:24:13.440 [2024-11-19 23:36:59.415069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.416956] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:13.440 [2024-11-19 23:36:59.420704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.420779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:13.440 [2024-11-19 23:36:59.420791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:24:13.440 [2024-11-19 23:36:59.420807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.420889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.420905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:13.440 [2024-11-19 23:36:59.420919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:13.440 [2024-11-19 23:36:59.420927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.429396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.429446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:13.440 [2024-11-19 23:36:59.429465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.409 ms 00:24:13.440 [2024-11-19 23:36:59.429479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.429579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.429592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:13.440 [2024-11-19 23:36:59.429605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:13.440 [2024-11-19 23:36:59.429614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.429669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.429679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:13.440 [2024-11-19 23:36:59.429694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:13.440 [2024-11-19 23:36:59.429702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.429753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:13.440 [2024-11-19 23:36:59.431854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.431920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:13.440 [2024-11-19 23:36:59.431931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.132 ms 00:24:13.440 [2024-11-19 23:36:59.431939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.440 [2024-11-19 23:36:59.431977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.431995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:13.440 [2024-11-19 23:36:59.432003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:13.440 [2024-11-19 23:36:59.432015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:24:13.440 [2024-11-19 23:36:59.432053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:13.440 [2024-11-19 23:36:59.432075] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:13.440 [2024-11-19 23:36:59.432114] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:13.440 [2024-11-19 23:36:59.432132] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:13.440 [2024-11-19 23:36:59.432238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:13.440 [2024-11-19 23:36:59.432249] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:13.440 [2024-11-19 23:36:59.432264] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:13.440 [2024-11-19 23:36:59.432279] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:13.440 [2024-11-19 23:36:59.432288] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:13.440 [2024-11-19 23:36:59.432297] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:13.440 [2024-11-19 23:36:59.432305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:13.440 [2024-11-19 23:36:59.432313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:13.440 [2024-11-19 23:36:59.432321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:13.440 [2024-11-19 23:36:59.432329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.440 [2024-11-19 23:36:59.432340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:13.440 [2024-11-19 23:36:59.432349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:24:13.441 [2024-11-19 23:36:59.432357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.441 [2024-11-19 23:36:59.432439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.441 [2024-11-19 23:36:59.432451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:13.441 [2024-11-19 23:36:59.432459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:13.441 [2024-11-19 23:36:59.432467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.441 [2024-11-19 23:36:59.432567] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:13.441 [2024-11-19 23:36:59.432579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:13.441 [2024-11-19 23:36:59.432589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.441 [2024-11-19 23:36:59.432598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:13.441 [2024-11-19 23:36:59.432623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:13.441 [2024-11-19 23:36:59.432641] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:13.441 [2024-11-19 23:36:59.432650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.441 [2024-11-19 23:36:59.432670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:13.441 [2024-11-19 23:36:59.432678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:13.441 [2024-11-19 23:36:59.432686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:13.441 [2024-11-19 23:36:59.432694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:13.441 [2024-11-19 23:36:59.432705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:13.441 [2024-11-19 23:36:59.432714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:13.441 [2024-11-19 23:36:59.432748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:13.441 [2024-11-19 23:36:59.432757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:13.441 [2024-11-19 23:36:59.432774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.441 [2024-11-19 23:36:59.432790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:13.441 [2024-11-19 23:36:59.432798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.441 [2024-11-19 23:36:59.432820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:13.441 [2024-11-19 23:36:59.432828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.441 [2024-11-19 23:36:59.432844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:13.441 [2024-11-19 23:36:59.432852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:13.441 [2024-11-19 23:36:59.432868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:13.441 [2024-11-19 23:36:59.432876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.441 [2024-11-19 23:36:59.432891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:13.441 [2024-11-19 23:36:59.432899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:13.441 [2024-11-19 23:36:59.432907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:13.441 [2024-11-19 23:36:59.432915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:13.441 [2024-11-19 23:36:59.432923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:13.441 
[2024-11-19 23:36:59.432931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:13.441 [2024-11-19 23:36:59.432953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:13.441 [2024-11-19 23:36:59.432962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.441 [2024-11-19 23:36:59.432970] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:13.441 [2024-11-19 23:36:59.432979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:13.441 [2024-11-19 23:36:59.432989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:13.441 [2024-11-19 23:36:59.433003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:13.441 [2024-11-19 23:36:59.433012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:13.441 [2024-11-19 23:36:59.433019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:13.441 [2024-11-19 23:36:59.433027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:13.441 [2024-11-19 23:36:59.433034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:13.441 [2024-11-19 23:36:59.433041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:13.441 [2024-11-19 23:36:59.433048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:13.441 [2024-11-19 23:36:59.433058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:13.441 [2024-11-19 23:36:59.433068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.441 [2024-11-19 23:36:59.433076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:13.441 [2024-11-19 23:36:59.433084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:13.441 [2024-11-19 23:36:59.433093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:13.441 [2024-11-19 23:36:59.433100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:13.441 [2024-11-19 23:36:59.433108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:13.441 [2024-11-19 23:36:59.433114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:13.441 [2024-11-19 23:36:59.433121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:13.441 [2024-11-19 23:36:59.433128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:13.441 [2024-11-19 23:36:59.433135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:13.441 [2024-11-19 23:36:59.433142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:13.441 [2024-11-19 23:36:59.433149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:13.441 [2024-11-19 23:36:59.433155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:13.441 [2024-11-19 23:36:59.433163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:13.441 [2024-11-19 23:36:59.433171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:13.441 [2024-11-19 23:36:59.433178] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:13.441 [2024-11-19 23:36:59.433187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:13.441 [2024-11-19 23:36:59.433195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:13.441 [2024-11-19 23:36:59.433203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:13.441 [2024-11-19 23:36:59.433212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:13.441 [2024-11-19 23:36:59.433221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:13.441 [2024-11-19 23:36:59.433229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.433237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:13.442 [2024-11-19 23:36:59.433245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:24:13.442 [2024-11-19 23:36:59.433255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.448414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.448468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.442 [2024-11-19 23:36:59.448482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.112 ms 00:24:13.442 [2024-11-19 23:36:59.448491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.448589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.448598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:13.442 [2024-11-19 23:36:59.448607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:13.442 [2024-11-19 23:36:59.448615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.475510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.475600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:13.442 [2024-11-19 23:36:59.475640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.833 ms 00:24:13.442 [2024-11-19 
23:36:59.475658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.475779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.475804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:13.442 [2024-11-19 23:36:59.475831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.442 [2024-11-19 23:36:59.475848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.476565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.476635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:13.442 [2024-11-19 23:36:59.476656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:24:13.442 [2024-11-19 23:36:59.476673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.476996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.477042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:13.442 [2024-11-19 23:36:59.477062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:24:13.442 [2024-11-19 23:36:59.477079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.486082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.486139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:13.442 [2024-11-19 23:36:59.486156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.961 ms 00:24:13.442 [2024-11-19 23:36:59.486165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.490160] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:13.442 [2024-11-19 23:36:59.490223] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:13.442 [2024-11-19 23:36:59.490237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.490246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:13.442 [2024-11-19 23:36:59.490255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.958 ms 00:24:13.442 [2024-11-19 23:36:59.490263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.506674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.506745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:13.442 [2024-11-19 23:36:59.506759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.354 ms 00:24:13.442 [2024-11-19 23:36:59.506767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.509819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.509869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:13.442 [2024-11-19 23:36:59.509879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.995 ms 00:24:13.442 [2024-11-19 23:36:59.509886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:13.442 [2024-11-19 23:36:59.512764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.512813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:13.442 [2024-11-19 23:36:59.512833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.830 ms 00:24:13.442 [2024-11-19 23:36:59.512840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.513184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.513204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:13.442 [2024-11-19 23:36:59.513214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:24:13.442 [2024-11-19 23:36:59.513223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.538467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.538529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:13.442 [2024-11-19 23:36:59.538542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.220 ms 00:24:13.442 [2024-11-19 23:36:59.538551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.546701] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:13.442 [2024-11-19 23:36:59.549803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.549859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:13.442 [2024-11-19 23:36:59.549872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.197 ms 00:24:13.442 [2024-11-19 23:36:59.549880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.549960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.549972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:13.442 [2024-11-19 23:36:59.549982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:13.442 [2024-11-19 23:36:59.549990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.550758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.550796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:13.442 [2024-11-19 23:36:59.550815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:24:13.442 [2024-11-19 23:36:59.550824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.550851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.550859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:13.442 [2024-11-19 23:36:59.550867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:13.442 [2024-11-19 23:36:59.550875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.550914] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:13.442 [2024-11-19 23:36:59.550924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:13.442 [2024-11-19 23:36:59.550935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:13.442 [2024-11-19 23:36:59.550943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:13.442 [2024-11-19 23:36:59.550954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.556406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.556455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:13.442 [2024-11-19 23:36:59.556467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.433 ms 00:24:13.442 [2024-11-19 23:36:59.556475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.442 [2024-11-19 23:36:59.556560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.442 [2024-11-19 23:36:59.556575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:13.442 [2024-11-19 23:36:59.556584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:13.442 [2024-11-19 23:36:59.556592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.443 [2024-11-19 23:36:59.557846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.888 ms, result 0 00:24:14.839  [2024-11-19T23:37:01.975Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-19T23:37:02.920Z] Copying: 31/1024 [MB] (16 MBps) [2024-11-19T23:37:03.931Z] Copying: 46/1024 [MB] (14 MBps) [2024-11-19T23:37:04.875Z] Copying: 65/1024 [MB] (18 MBps) [2024-11-19T23:37:05.822Z] Copying: 83/1024 [MB] (18 MBps) [2024-11-19T23:37:06.768Z] Copying: 101/1024 [MB] (17 MBps) [2024-11-19T23:37:08.156Z] Copying: 112/1024 [MB] (11 MBps) [2024-11-19T23:37:09.102Z] Copying: 130/1024 [MB] (18 MBps) [2024-11-19T23:37:10.052Z] Copying: 153/1024 [MB] (22 MBps) [2024-11-19T23:37:10.998Z] Copying: 172/1024 [MB] (19 MBps) [2024-11-19T23:37:11.943Z] Copying: 192/1024 [MB] (20 MBps) [2024-11-19T23:37:12.889Z] Copying: 204/1024 [MB] (11 MBps) [2024-11-19T23:37:13.833Z] Copying: 215/1024 [MB] (10 MBps) [2024-11-19T23:37:14.778Z] Copying: 225/1024 [MB] (10 MBps) [2024-11-19T23:37:16.164Z] Copying: 237/1024 [MB] (11 MBps) [2024-11-19T23:37:16.738Z] Copying: 253/1024 [MB] (16 MBps) [2024-11-19T23:37:18.127Z] Copying: 268/1024 [MB] (14 MBps) [2024-11-19T23:37:19.071Z] Copying: 280/1024 [MB] (12 MBps) [2024-11-19T23:37:20.027Z] Copying: 298/1024 [MB] (17 MBps) [2024-11-19T23:37:20.970Z] Copying: 316/1024 [MB] (17 MBps) [2024-11-19T23:37:21.914Z] Copying: 334/1024 [MB] (18 MBps) [2024-11-19T23:37:22.855Z] Copying: 355/1024 [MB] (20 MBps) [2024-11-19T23:37:23.796Z] Copying: 372/1024 [MB] (17 MBps) [2024-11-19T23:37:24.738Z] Copying: 388/1024 [MB] (16 MBps) [2024-11-19T23:37:26.128Z] Copying: 407/1024 [MB] (18 MBps) [2024-11-19T23:37:27.076Z] Copying: 419/1024 [MB] (12 MBps) [2024-11-19T23:37:28.019Z] Copying: 430/1024 [MB] (10 MBps) [2024-11-19T23:37:28.962Z] Copying: 442/1024 [MB] (12 MBps) [2024-11-19T23:37:29.906Z] Copying: 453/1024 [MB] (10 MBps) [2024-11-19T23:37:30.850Z] Copying: 473/1024 [MB] (20 MBps) [2024-11-19T23:37:31.794Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-19T23:37:32.737Z] Copying: 494/1024 [MB] (10 MBps) [2024-11-19T23:37:34.173Z] Copying: 504/1024 [MB] (10 MBps) [2024-11-19T23:37:34.771Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-19T23:37:36.156Z] Copying: 526/1024 [MB] (10 MBps) 
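[editor's note] The progress stream around this point comes from the second spdk_dd pass shown earlier, which reads --count=262144 blocks at --skip=262144 from ftl0 into testfile2. Given the script's `md5sum -c` check against testfile above, the same streamed-checksum verification presumably applies to this read-back; below is a minimal Python sketch of that check, with hypothetical short paths standing in for the full repo paths:

import hashlib

# 262144 blocks at an assumed 4 KiB logical block size is the 1024 MB the
# progress counter reports: 262144 * 4096 // 2**20 == 1024.

def md5_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through MD5 in 1 MiB chunks, like md5sum does."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical paths; .md5 manifests hold "<hex digest>  <path>" lines.
expected = open("testfile2.md5").read().split()[0]
assert md5_of("testfile2") == expected, "read-back mismatch after dirty shutdown"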
[2024-11-19T23:37:37.101Z] Copying: 541/1024 [MB] (14 MBps) [2024-11-19T23:37:38.044Z] Copying: 551/1024 [MB] (10 MBps) [2024-11-19T23:37:38.988Z] Copying: 564/1024 [MB] (12 MBps) [2024-11-19T23:37:39.932Z] Copying: 577/1024 [MB] (12 MBps) [2024-11-19T23:37:40.877Z] Copying: 592/1024 [MB] (15 MBps) [2024-11-19T23:37:41.825Z] Copying: 607/1024 [MB] (14 MBps) [2024-11-19T23:37:42.768Z] Copying: 628/1024 [MB] (20 MBps) [2024-11-19T23:37:44.155Z] Copying: 645/1024 [MB] (17 MBps) [2024-11-19T23:37:45.099Z] Copying: 664/1024 [MB] (18 MBps) [2024-11-19T23:37:46.042Z] Copying: 675/1024 [MB] (10 MBps) [2024-11-19T23:37:46.985Z] Copying: 690/1024 [MB] (14 MBps) [2024-11-19T23:37:47.927Z] Copying: 707/1024 [MB] (17 MBps) [2024-11-19T23:37:48.868Z] Copying: 725/1024 [MB] (18 MBps) [2024-11-19T23:37:49.812Z] Copying: 740/1024 [MB] (14 MBps) [2024-11-19T23:37:50.755Z] Copying: 762/1024 [MB] (22 MBps) [2024-11-19T23:37:52.148Z] Copying: 778/1024 [MB] (15 MBps) [2024-11-19T23:37:53.092Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-19T23:37:54.037Z] Copying: 799/1024 [MB] (10 MBps) [2024-11-19T23:37:54.981Z] Copying: 810/1024 [MB] (10 MBps) [2024-11-19T23:37:55.925Z] Copying: 821/1024 [MB] (10 MBps) [2024-11-19T23:37:56.867Z] Copying: 831/1024 [MB] (10 MBps) [2024-11-19T23:37:57.810Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-19T23:37:58.759Z] Copying: 854/1024 [MB] (11 MBps) [2024-11-19T23:38:00.149Z] Copying: 866/1024 [MB] (12 MBps) [2024-11-19T23:38:01.093Z] Copying: 876/1024 [MB] (10 MBps) [2024-11-19T23:38:02.037Z] Copying: 888/1024 [MB] (11 MBps) [2024-11-19T23:38:02.981Z] Copying: 899/1024 [MB] (11 MBps) [2024-11-19T23:38:03.926Z] Copying: 910/1024 [MB] (10 MBps) [2024-11-19T23:38:04.868Z] Copying: 934/1024 [MB] (23 MBps) [2024-11-19T23:38:05.813Z] Copying: 964/1024 [MB] (30 MBps) [2024-11-19T23:38:06.810Z] Copying: 987/1024 [MB] (23 MBps) [2024-11-19T23:38:07.755Z] Copying: 1004/1024 [MB] (16 MBps) [2024-11-19T23:38:08.329Z] Copying: 1015/1024 [MB] (11 MBps) [2024-11-19T23:38:08.329Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-19 23:38:08.169122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.169225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:22.137 [2024-11-19 23:38:08.169247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:22.137 [2024-11-19 23:38:08.169278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.169311] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:22.137 [2024-11-19 23:38:08.170199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.170252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:22.137 [2024-11-19 23:38:08.170270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:25:22.137 [2024-11-19 23:38:08.170283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.170700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.170768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:22.137 [2024-11-19 23:38:08.170789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:25:22.137 [2024-11-19 23:38:08.170809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 
[2024-11-19 23:38:08.176876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.176923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:22.137 [2024-11-19 23:38:08.176936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.024 ms 00:25:22.137 [2024-11-19 23:38:08.176954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.183247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.183298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:22.137 [2024-11-19 23:38:08.183310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.267 ms 00:25:22.137 [2024-11-19 23:38:08.183319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.186051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.186110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:22.137 [2024-11-19 23:38:08.186121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:25:22.137 [2024-11-19 23:38:08.186129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.191724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.191798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:22.137 [2024-11-19 23:38:08.191809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.567 ms 00:25:22.137 [2024-11-19 23:38:08.191818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.194842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.194909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:22.137 [2024-11-19 23:38:08.194922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms 00:25:22.137 [2024-11-19 23:38:08.194931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.197453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.197514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:22.137 [2024-11-19 23:38:08.197526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:25:22.137 [2024-11-19 23:38:08.197534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.199675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.199748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:22.137 [2024-11-19 23:38:08.199759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:25:22.137 [2024-11-19 23:38:08.199768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.201473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.201526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:22.137 [2024-11-19 23:38:08.201536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.679 ms 00:25:22.137 [2024-11-19 23:38:08.201543] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.203091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.137 [2024-11-19 23:38:08.203142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:22.137 [2024-11-19 23:38:08.203152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:25:22.137 [2024-11-19 23:38:08.203159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.137 [2024-11-19 23:38:08.203180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:22.137 [2024-11-19 23:38:08.203195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:22.137 [2024-11-19 23:38:08.203206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:22.137 [2024-11-19 23:38:08.203214 .. 23:38:08.204222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free 00:25:22.138 [2024-11-19 23:38:08.204242] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:22.138 [2024-11-19 23:38:08.204256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2df5abd4-1489-4fe8-8391-19c1c8098aee 00:25:22.138 [2024-11-19 23:38:08.204269] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:22.138 [2024-11-19 23:38:08.204282] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:22.138 [2024-11-19 23:38:08.204294] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:22.138 [2024-11-19 23:38:08.204326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:22.138 [2024-11-19 23:38:08.204339] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:22.138 [2024-11-19 23:38:08.204352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:22.138 [2024-11-19 23:38:08.204365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:22.138 [2024-11-19 23:38:08.204376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:22.138 [2024-11-19 23:38:08.204386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:22.138 [2024-11-19 23:38:08.204410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.138 [2024-11-19 23:38:08.204437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:22.138 [2024-11-19 23:38:08.204452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:25:22.138 [2024-11-19 23:38:08.204464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.138 [2024-11-19 23:38:08.206861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.138 [2024-11-19 23:38:08.206902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:22.138 [2024-11-19 23:38:08.206914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:25:22.138 [2024-11-19 23:38:08.206922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.138 [2024-11-19 23:38:08.207074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.139 [2024-11-19 23:38:08.207084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:22.139 [2024-11-19 23:38:08.207093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:22.139 [2024-11-19 23:38:08.207100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.214382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.214437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:25:22.139 [2024-11-19 23:38:08.214448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.214462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.214519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.214527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:22.139 [2024-11-19 23:38:08.214535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.214542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.214587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.214598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:22.139 [2024-11-19 23:38:08.214606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.214613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.214646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.214654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:22.139 [2024-11-19 23:38:08.214664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.214671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.227954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.228012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:22.139 [2024-11-19 23:38:08.228030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.228043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.239178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:22.139 [2024-11-19 23:38:08.239191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.239200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.239264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:22.139 [2024-11-19 23:38:08.239273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.239281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.239334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:22.139 [2024-11-19 23:38:08.239343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.239350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 
23:38:08.239439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:22.139 [2024-11-19 23:38:08.239447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.239459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.239499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:22.139 [2024-11-19 23:38:08.239516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.239532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.239607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:22.139 [2024-11-19 23:38:08.239617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.239626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:22.139 [2024-11-19 23:38:08.239711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:22.139 [2024-11-19 23:38:08.239725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:22.139 [2024-11-19 23:38:08.239762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.139 [2024-11-19 23:38:08.239956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.785 ms, result 0 00:25:22.400 00:25:22.400 00:25:22.400 23:38:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:24.948 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:24.948 Process with pid 88777 is not found 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88777 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 88777 ']' 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 88777 00:25:24.948 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88777) - No such process 00:25:24.948 23:38:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 88777 is not found' 00:25:24.948 
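The "testfile2: OK" result above is the point of the dirty-shutdown test: data written before the unclean stop must read back bit-identical after FTL recovery. A minimal sketch of that round-trip, with illustrative paths and sizes (the actual harness drives it through dirty_shutdown.sh):

    # Record a checksum before the simulated crash, verify it after recovery.
    dd if=/dev/urandom of=testfile2 bs=1M count=1024   # illustrative data write
    md5sum testfile2 > testfile2.md5                   # checksum before the unclean shutdown
    # ... unclean shutdown, restart, FTL recovery from the NV cache ...
    md5sum -c testfile2.md5                            # expects "testfile2: OK"

The kill -0 probe in the cleanup above is the harness's liveness check: kill -0 "$pid" sends no signal but fails when the process is gone, which is why pid 88777 is reported as not found after the target already exited.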
23:38:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:24.948 Remove shared memory files 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:24.948 00:25:24.948 real 3m49.487s 00:25:24.948 user 4m9.992s 00:25:24.948 sys 0m25.614s 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.948 ************************************ 00:25:24.948 END TEST ftl_dirty_shutdown 00:25:24.948 23:38:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:24.948 ************************************ 00:25:25.208 23:38:11 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:25.208 23:38:11 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:25.208 23:38:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.208 23:38:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:25.208 ************************************ 00:25:25.208 START TEST ftl_upgrade_shutdown 00:25:25.208 ************************************ 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:25.208 * Looking for test storage... 
00:25:25.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.208 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:25.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.209 --rc genhtml_branch_coverage=1 00:25:25.209 --rc genhtml_function_coverage=1 00:25:25.209 --rc genhtml_legend=1 00:25:25.209 --rc geninfo_all_blocks=1 00:25:25.209 --rc geninfo_unexecuted_blocks=1 00:25:25.209 00:25:25.209 ' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:25.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.209 --rc genhtml_branch_coverage=1 00:25:25.209 --rc genhtml_function_coverage=1 00:25:25.209 --rc genhtml_legend=1 00:25:25.209 --rc geninfo_all_blocks=1 00:25:25.209 --rc geninfo_unexecuted_blocks=1 00:25:25.209 00:25:25.209 ' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:25.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.209 --rc genhtml_branch_coverage=1 00:25:25.209 --rc genhtml_function_coverage=1 00:25:25.209 --rc genhtml_legend=1 00:25:25.209 --rc geninfo_all_blocks=1 00:25:25.209 --rc geninfo_unexecuted_blocks=1 00:25:25.209 00:25:25.209 ' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:25.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.209 --rc genhtml_branch_coverage=1 00:25:25.209 --rc genhtml_function_coverage=1 00:25:25.209 --rc genhtml_legend=1 00:25:25.209 --rc geninfo_all_blocks=1 00:25:25.209 --rc geninfo_unexecuted_blocks=1 00:25:25.209 00:25:25.209 ' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- 
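The xtrace above walks through scripts/common.sh comparing lcov's version against 2 (lt 1.15 2): each version string is split on '.', '-' and ':' into an array, components are compared numerically left to right, and missing components count as 0. A condensed sketch of that logic, simplified rather than the verbatim source:

    cmp_versions() { # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
            if (( a != b )); then
                case $2 in
                    '<') (( a < b )); return ;;
                    '>') (( a > b )); return ;;
                esac
            fi
        done
        [[ $2 == *'='* ]]   # equal versions only satisfy <=, >=, ==
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov older than 2'   # succeeds: 1 < 2 at the first component

Here 1.15 < 2 is decided at the first component, so the trace returns 0 and the lcov-specific coverage flags are enabled.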
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:25.209 23:38:11 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91271 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91271 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91271 ']' 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.209 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.210 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.210 23:38:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:25.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.210 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.210 23:38:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:25.470 [2024-11-19 23:38:11.431800] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:25:25.470 [2024-11-19 23:38:11.431996] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91271 ] 00:25:25.470 [2024-11-19 23:38:11.595978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.470 [2024-11-19 23:38:11.625216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:26.417 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:25:26.679 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:26.940 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:26.940 { 00:25:26.940 "name": "basen1", 00:25:26.940 "aliases": [ 00:25:26.940 "aa235b38-f0c3-43dd-90cb-c48074c49cfe" 00:25:26.940 ], 00:25:26.940 "product_name": "NVMe disk", 00:25:26.940 "block_size": 4096, 00:25:26.940 "num_blocks": 1310720, 00:25:26.940 "uuid": "aa235b38-f0c3-43dd-90cb-c48074c49cfe", 00:25:26.940 "numa_id": -1, 00:25:26.940 "assigned_rate_limits": { 00:25:26.940 "rw_ios_per_sec": 0, 00:25:26.940 "rw_mbytes_per_sec": 0, 00:25:26.940 "r_mbytes_per_sec": 0, 00:25:26.940 "w_mbytes_per_sec": 0 00:25:26.940 }, 00:25:26.940 "claimed": true, 00:25:26.940 "claim_type": "read_many_write_one", 00:25:26.940 "zoned": false, 00:25:26.940 "supported_io_types": { 00:25:26.940 "read": true, 00:25:26.940 "write": true, 00:25:26.940 "unmap": true, 00:25:26.940 "flush": true, 00:25:26.940 "reset": true, 00:25:26.940 "nvme_admin": true, 00:25:26.940 "nvme_io": true, 00:25:26.940 "nvme_io_md": false, 00:25:26.940 "write_zeroes": true, 00:25:26.940 "zcopy": false, 00:25:26.940 "get_zone_info": false, 00:25:26.940 "zone_management": false, 00:25:26.940 "zone_append": false, 00:25:26.940 "compare": true, 00:25:26.940 "compare_and_write": false, 00:25:26.940 "abort": true, 00:25:26.940 "seek_hole": false, 00:25:26.940 "seek_data": false, 00:25:26.940 "copy": true, 00:25:26.940 "nvme_iov_md": false 00:25:26.940 }, 00:25:26.940 "driver_specific": { 00:25:26.940 "nvme": [ 00:25:26.940 { 00:25:26.940 "pci_address": "0000:00:11.0", 00:25:26.940 "trid": { 00:25:26.941 "trtype": "PCIe", 00:25:26.941 "traddr": "0000:00:11.0" 00:25:26.941 }, 00:25:26.941 "ctrlr_data": { 00:25:26.941 "cntlid": 0, 00:25:26.941 "vendor_id": "0x1b36", 00:25:26.941 "model_number": "QEMU NVMe Ctrl", 00:25:26.941 "serial_number": "12341", 00:25:26.941 "firmware_revision": "8.0.0", 00:25:26.941 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:26.941 "oacs": { 00:25:26.941 "security": 0, 00:25:26.941 "format": 1, 00:25:26.941 "firmware": 0, 00:25:26.941 "ns_manage": 1 00:25:26.941 }, 00:25:26.941 "multi_ctrlr": false, 00:25:26.941 "ana_reporting": false 00:25:26.941 }, 00:25:26.941 "vs": { 00:25:26.941 "nvme_version": "1.4" 00:25:26.941 }, 00:25:26.941 "ns_data": { 00:25:26.941 "id": 1, 00:25:26.941 "can_share": false 00:25:26.941 } 00:25:26.941 } 00:25:26.941 ], 00:25:26.941 "mp_policy": "active_passive" 00:25:26.941 } 00:25:26.941 } 00:25:26.941 ]' 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- 
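The bs=4096 / nb=1310720 pair extracted above is how get_bdev_size arrives at 5120: block size times block count, converted to MiB. The same query can be reproduced against the running target with the rpc.py calls and jq filters visible in the trace:

    # Sketch: derive a bdev's size in MiB the way get_bdev_size does above.
    bs=$(scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .block_size')   # 4096
    nb=$(scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))                                      # 4096 * 1310720 / 2^20 = 5120

Since the requested 20480 MiB base exceeds the 5120 MiB namespace, the [[ 20480 -le 5120 ]] guard fails, and the trace proceeds to carve the base device out of a thin-provisioned lvol on that namespace instead.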
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:26.941 23:38:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:27.202 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0c508cf9-24bb-4105-a298-100ec769327c 00:25:27.202 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:27.202 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c508cf9-24bb-4105-a298-100ec769327c 00:25:27.202 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:27.463 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=79adb5c2-d44b-4ed9-b4f7-9db07ee0d408 00:25:27.463 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 79adb5c2-d44b-4ed9-b4f7-9db07ee0d408 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 ]] 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 5120 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:27.724 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:27.985 { 00:25:27.985 "name": "c12c7d1e-8f4a-4ca6-91b3-7042cca6e294", 00:25:27.985 "aliases": [ 00:25:27.985 "lvs/basen1p0" 00:25:27.985 ], 00:25:27.985 "product_name": "Logical Volume", 00:25:27.985 "block_size": 4096, 00:25:27.985 "num_blocks": 5242880, 00:25:27.985 "uuid": "c12c7d1e-8f4a-4ca6-91b3-7042cca6e294", 00:25:27.985 "assigned_rate_limits": { 00:25:27.985 "rw_ios_per_sec": 0, 00:25:27.985 "rw_mbytes_per_sec": 0, 00:25:27.985 "r_mbytes_per_sec": 0, 00:25:27.985 "w_mbytes_per_sec": 0 00:25:27.985 }, 00:25:27.985 "claimed": false, 00:25:27.985 "zoned": false, 00:25:27.985 "supported_io_types": { 00:25:27.985 "read": true, 00:25:27.985 "write": true, 00:25:27.985 "unmap": true, 00:25:27.985 "flush": false, 00:25:27.985 "reset": true, 00:25:27.985 "nvme_admin": false, 00:25:27.985 "nvme_io": false, 00:25:27.985 "nvme_io_md": false, 00:25:27.985 "write_zeroes": 
true, 00:25:27.985 "zcopy": false, 00:25:27.985 "get_zone_info": false, 00:25:27.985 "zone_management": false, 00:25:27.985 "zone_append": false, 00:25:27.985 "compare": false, 00:25:27.985 "compare_and_write": false, 00:25:27.985 "abort": false, 00:25:27.985 "seek_hole": true, 00:25:27.985 "seek_data": true, 00:25:27.985 "copy": false, 00:25:27.985 "nvme_iov_md": false 00:25:27.985 }, 00:25:27.985 "driver_specific": { 00:25:27.985 "lvol": { 00:25:27.985 "lvol_store_uuid": "79adb5c2-d44b-4ed9-b4f7-9db07ee0d408", 00:25:27.985 "base_bdev": "basen1", 00:25:27.985 "thin_provision": true, 00:25:27.985 "num_allocated_clusters": 0, 00:25:27.985 "snapshot": false, 00:25:27.985 "clone": false, 00:25:27.985 "esnap_clone": false 00:25:27.985 } 00:25:27.985 } 00:25:27.985 } 00:25:27.985 ]' 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:27.985 23:38:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:28.246 23:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:28.246 23:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:28.246 23:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:28.246 23:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:28.246 23:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:28.246 23:38:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d c12c7d1e-8f4a-4ca6-91b3-7042cca6e294 -c cachen1p0 --l2p_dram_limit 2 00:25:28.509 [2024-11-19 23:38:14.595030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.595064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:28.509 [2024-11-19 23:38:14.595074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:28.509 [2024-11-19 23:38:14.595082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.595120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.595131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:28.509 [2024-11-19 23:38:14.595139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:25:28.509 [2024-11-19 23:38:14.595147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.595161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:28.509 [2024-11-19 
23:38:14.595345] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:28.509 [2024-11-19 23:38:14.595360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.595367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:28.509 [2024-11-19 23:38:14.595377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:25:28.509 [2024-11-19 23:38:14.595384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.595407] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f5b45ef2-d912-4d0e-aec2-7f43cd25ea7a 00:25:28.509 [2024-11-19 23:38:14.596337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.596354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:28.509 [2024-11-19 23:38:14.596364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:25:28.509 [2024-11-19 23:38:14.596370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.601022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.601044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:28.509 [2024-11-19 23:38:14.601053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.596 ms 00:25:28.509 [2024-11-19 23:38:14.601059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.601095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.601104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:28.509 [2024-11-19 23:38:14.601112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:28.509 [2024-11-19 23:38:14.601117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.601151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.601159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:28.509 [2024-11-19 23:38:14.601166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:28.509 [2024-11-19 23:38:14.601172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.601190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:28.509 [2024-11-19 23:38:14.602457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.602479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:28.509 [2024-11-19 23:38:14.602486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.273 ms 00:25:28.509 [2024-11-19 23:38:14.602493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.602512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.602519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:28.509 [2024-11-19 23:38:14.602525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:28.509 [2024-11-19 23:38:14.602533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.602565] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:28.509 [2024-11-19 23:38:14.602673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:28.509 [2024-11-19 23:38:14.602682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:28.509 [2024-11-19 23:38:14.602696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:28.509 [2024-11-19 23:38:14.602703] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:28.509 [2024-11-19 23:38:14.602714] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:28.509 [2024-11-19 23:38:14.602720] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:28.509 [2024-11-19 23:38:14.602747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:28.509 [2024-11-19 23:38:14.602752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:28.509 [2024-11-19 23:38:14.602759] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:28.509 [2024-11-19 23:38:14.602766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.602773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:28.509 [2024-11-19 23:38:14.602779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:25:28.509 [2024-11-19 23:38:14.602787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.602850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.509 [2024-11-19 23:38:14.602860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:28.509 [2024-11-19 23:38:14.602865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:28.509 [2024-11-19 23:38:14.602872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.509 [2024-11-19 23:38:14.602944] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:28.509 [2024-11-19 23:38:14.602953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:28.509 [2024-11-19 23:38:14.602961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:28.509 [2024-11-19 23:38:14.602968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.602973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:28.509 [2024-11-19 23:38:14.602981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.602986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:28.509 [2024-11-19 23:38:14.602992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:28.509 [2024-11-19 23:38:14.602999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:28.509 [2024-11-19 23:38:14.603005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:28.509 [2024-11-19 23:38:14.603018] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:28.509 [2024-11-19 23:38:14.603023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:28.509 [2024-11-19 23:38:14.603036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:28.509 [2024-11-19 23:38:14.603043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:28.509 [2024-11-19 23:38:14.603055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:28.509 [2024-11-19 23:38:14.603060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:28.509 [2024-11-19 23:38:14.603072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:28.509 [2024-11-19 23:38:14.603078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:28.509 [2024-11-19 23:38:14.603083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:28.509 [2024-11-19 23:38:14.603090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:28.509 [2024-11-19 23:38:14.603095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:28.509 [2024-11-19 23:38:14.603101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:28.509 [2024-11-19 23:38:14.603107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:28.509 [2024-11-19 23:38:14.603114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:28.509 [2024-11-19 23:38:14.603120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:28.509 [2024-11-19 23:38:14.603128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:28.509 [2024-11-19 23:38:14.603134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:28.509 [2024-11-19 23:38:14.603144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:28.509 [2024-11-19 23:38:14.603150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:28.509 [2024-11-19 23:38:14.603157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:28.509 [2024-11-19 23:38:14.603169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:28.509 [2024-11-19 23:38:14.603175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:28.509 [2024-11-19 23:38:14.603188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:28.509 [2024-11-19 23:38:14.603208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:28.509 [2024-11-19 23:38:14.603214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.509 [2024-11-19 23:38:14.603220] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:28.509 [2024-11-19 23:38:14.603226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:28.510 [2024-11-19 23:38:14.603242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:28.510 [2024-11-19 23:38:14.603249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:28.510 [2024-11-19 23:38:14.603257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:28.510 [2024-11-19 23:38:14.603264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:28.510 [2024-11-19 23:38:14.603272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:28.510 [2024-11-19 23:38:14.603278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:28.510 [2024-11-19 23:38:14.603285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:28.510 [2024-11-19 23:38:14.603290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:28.510 [2024-11-19 23:38:14.603300] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:28.510 [2024-11-19 23:38:14.603309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:28.510 [2024-11-19 23:38:14.603326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:28.510 [2024-11-19 23:38:14.603347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:28.510 [2024-11-19 23:38:14.603353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:28.510 [2024-11-19 23:38:14.603362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:28.510 [2024-11-19 23:38:14.603368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:28.510 [2024-11-19 23:38:14.603416] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:28.510 [2024-11-19 23:38:14.603423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:28.510 [2024-11-19 23:38:14.603437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:28.510 [2024-11-19 23:38:14.603445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:28.510 [2024-11-19 23:38:14.603452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:28.510 [2024-11-19 23:38:14.603459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:28.510 [2024-11-19 23:38:14.603465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:28.510 [2024-11-19 23:38:14.603478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:25:28.510 [2024-11-19 23:38:14.603484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:28.510 [2024-11-19 23:38:14.603514] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
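The startup trace above (new superblock, layout setup and dump, then the NV-cache scrub announcement) is what FTL emits when an instance is created over a base bdev and a non-volatile cache bdev that carry no prior superblock. A minimal sketch of the RPC that kicks off this sequence, assuming hypothetical bdev names base0 and nvc0 (the real device names are configured earlier in the test, outside this excerpt):

# Create an FTL bdev named "ftl" on top of a base device and an NV cache device.
# The capacities reported above (20480.00 MiB base, 5120.00 MiB cache) come from these bdevs.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create -b ftl -d base0 -c nvc0

Because no valid superblock is found, startup takes the "Create new FTL, UUID ..." path and scrubs the 5 cache chunks before the bdev comes up, which accounts for most of the startup time reported below.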
00:25:28.510 [2024-11-19 23:38:14.603522] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:32.719 [2024-11-19 23:38:18.676907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.676985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:32.719 [2024-11-19 23:38:18.677006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4073.370 ms 00:25:32.719 [2024-11-19 23:38:18.677016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.690237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.690287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:32.719 [2024-11-19 23:38:18.690305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.099 ms 00:25:32.719 [2024-11-19 23:38:18.690315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.690397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.690408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:32.719 [2024-11-19 23:38:18.690419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:25:32.719 [2024-11-19 23:38:18.690428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.703005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.703048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:32.719 [2024-11-19 23:38:18.703061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.537 ms 00:25:32.719 [2024-11-19 23:38:18.703070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.703108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.703117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:32.719 [2024-11-19 23:38:18.703128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:32.719 [2024-11-19 23:38:18.703136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.703697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.703750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:32.719 [2024-11-19 23:38:18.703771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:25:32.719 [2024-11-19 23:38:18.703781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.703862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.703876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:32.719 [2024-11-19 23:38:18.703892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:25:32.719 [2024-11-19 23:38:18.703901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.712293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.719 [2024-11-19 23:38:18.712330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:32.719 [2024-11-19 23:38:18.712344] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.366 ms 00:25:32.719 [2024-11-19 23:38:18.712353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.719 [2024-11-19 23:38:18.722142] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:32.719 [2024-11-19 23:38:18.723429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.723470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:32.720 [2024-11-19 23:38:18.723482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.007 ms 00:25:32.720 [2024-11-19 23:38:18.723493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.755704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.755786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:32.720 [2024-11-19 23:38:18.755804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.178 ms 00:25:32.720 [2024-11-19 23:38:18.755819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.755937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.755951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:32.720 [2024-11-19 23:38:18.755962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:25:32.720 [2024-11-19 23:38:18.755972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.761049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.761103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:32.720 [2024-11-19 23:38:18.761114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.024 ms 00:25:32.720 [2024-11-19 23:38:18.761132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.766606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.766665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:32.720 [2024-11-19 23:38:18.766677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.420 ms 00:25:32.720 [2024-11-19 23:38:18.766688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.767044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.767060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:32.720 [2024-11-19 23:38:18.767070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:25:32.720 [2024-11-19 23:38:18.767090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.814921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.814972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:32.720 [2024-11-19 23:38:18.814985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.788 ms 00:25:32.720 [2024-11-19 23:38:18.815001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.821551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:32.720 [2024-11-19 23:38:18.821600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:32.720 [2024-11-19 23:38:18.821612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.491 ms 00:25:32.720 [2024-11-19 23:38:18.821623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.827186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.827231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:32.720 [2024-11-19 23:38:18.827241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.515 ms 00:25:32.720 [2024-11-19 23:38:18.827251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.833245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.833293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:32.720 [2024-11-19 23:38:18.833304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.947 ms 00:25:32.720 [2024-11-19 23:38:18.833317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.833368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.833385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:32.720 [2024-11-19 23:38:18.833395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:32.720 [2024-11-19 23:38:18.833405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.833476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:32.720 [2024-11-19 23:38:18.833494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:32.720 [2024-11-19 23:38:18.833503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:32.720 [2024-11-19 23:38:18.833513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:32.720 [2024-11-19 23:38:18.835437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4239.939 ms, result 0 00:25:32.720 { 00:25:32.720 "name": "ftl", 00:25:32.720 "uuid": "f5b45ef2-d912-4d0e-aec2-7f43cd25ea7a" 00:25:32.720 } 00:25:32.720 23:38:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:32.981 [2024-11-19 23:38:19.052629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.981 23:38:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:33.242 23:38:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:33.503 [2024-11-19 23:38:19.481047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:33.503 23:38:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:33.503 [2024-11-19 23:38:19.685450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:33.763 23:38:19 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:34.024 Fill FTL, iteration 1 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91394 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91394 /var/tmp/spdk.tgt.sock 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91394 ']' 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:34.024 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:34.024 [2024-11-19 23:38:20.109485] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:25:34.024 [2024-11-19 23:38:20.109588] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91394 ] 00:25:34.284 [2024-11-19 23:38:20.267975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.284 [2024-11-19 23:38:20.286215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.855 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.855 23:38:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:34.855 23:38:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:35.116 ftln1 00:25:35.116 23:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:35.116 23:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91394 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91394 ']' 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91394 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91394 00:25:35.376 killing process with pid 91394 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91394' 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91394 00:25:35.376 23:38:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91394 00:25:35.636 23:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:35.636 23:38:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:35.636 [2024-11-19 23:38:21.746612] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
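The fill step above runs in two processes: a freshly started spdk_tgt (pid 91394) acts as the NVMe/TCP initiator whose bdev configuration is captured into ini.json, after which spdk_dd replays that config to drive I/O into the attached namespace. Condensed from the commands traced above (paths and arguments as in the log):

# Attach the FTL subsystem exported over NVMe/TCP; the namespace shows up as bdev ftln1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
    bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2018-09.io.spdk:cnode0

# Fill the first 1 GiB of ftln1 with random data: 1024 writes of 1 MiB at queue depth 2.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0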
00:25:35.636 [2024-11-19 23:38:21.747051] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91430 ] 00:25:35.897 [2024-11-19 23:38:21.904590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.897 [2024-11-19 23:38:21.922717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.282  [2024-11-19T23:38:24.422Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-19T23:38:25.364Z] Copying: 402/1024 [MB] (210 MBps) [2024-11-19T23:38:26.324Z] Copying: 650/1024 [MB] (248 MBps) [2024-11-19T23:38:26.897Z] Copying: 900/1024 [MB] (250 MBps) [2024-11-19T23:38:26.897Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:25:40.705 00:25:40.705 Calculate MD5 checksum, iteration 1 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:40.705 23:38:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:40.705 [2024-11-19 23:38:26.809816] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
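Each checksum pass is the mirror image of the fill: the same 1 GiB region is read back out of ftln1 into a scratch file over NVMe/TCP, and the file is hashed on the host side. Relative to the fill command, --if/--ob/--seek become --ib/--of/--skip:

# Read 1 GiB back from ftln1 into a regular file, then take its MD5 (first field of md5sum).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '

The checksum recorded for iteration 1 below (dc8897a8cb17bcf18b8741bf340dca76) is stashed in the sums array so the data can be verified again after the prep-upgrade shutdown later in the run.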
00:25:40.705 [2024-11-19 23:38:26.809948] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91483 ] 00:25:40.967 [2024-11-19 23:38:26.964917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.967 [2024-11-19 23:38:26.981382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.353  [2024-11-19T23:38:28.804Z] Copying: 653/1024 [MB] (653 MBps) [2024-11-19T23:38:29.064Z] Copying: 1024/1024 [MB] (average 646 MBps) 00:25:42.872 00:25:42.872 23:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:42.872 23:38:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:44.784 Fill FTL, iteration 2 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=dc8897a8cb17bcf18b8741bf340dca76 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:44.784 23:38:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:45.045 [2024-11-19 23:38:31.023392] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
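The seek/skip bookkeeping traced above (seek=1024, skip=1024, later 2048) advances both offsets by one iteration's worth of 1 MiB blocks per pass. A sketch of the loop these upgrade_shutdown.sh xtrace lines imply — not the verbatim script — using the tcp_dd helper from ftl/common.sh that wraps the spdk_dd invocations shown in this log (variable names follow the trace; testfile stands in for the scratch file path):

# Reconstruction of the fill/checksum loop from the xtrace above; offsets are in
# bs-sized blocks (bs=1048576), so each iteration covers 1 GiB.
bs=1048576 count=1024 qd=2 iterations=2
seek=0 skip=0
sums=()
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
for ((i = 0; i < iterations; i++)); do
    echo "Fill FTL, iteration $((i + 1))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs="$bs" --count="$count" --qd="$qd" --seek="$seek"
    seek=$((seek + count))
    echo "Calculate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs="$bs" --count="$count" --qd="$qd" --skip="$skip"
    skip=$((skip + count))
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
done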
00:25:45.045 [2024-11-19 23:38:31.023794] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91534 ] 00:25:45.045 [2024-11-19 23:38:31.183189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.045 [2024-11-19 23:38:31.222690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.431  [2024-11-19T23:38:33.562Z] Copying: 184/1024 [MB] (184 MBps) [2024-11-19T23:38:34.497Z] Copying: 391/1024 [MB] (207 MBps) [2024-11-19T23:38:35.871Z] Copying: 626/1024 [MB] (235 MBps) [2024-11-19T23:38:36.130Z] Copying: 877/1024 [MB] (251 MBps) [2024-11-19T23:38:36.390Z] Copying: 1024/1024 [MB] (average 222 MBps) 00:25:50.198 00:25:50.198 Calculate MD5 checksum, iteration 2 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:50.198 23:38:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:50.198 [2024-11-19 23:38:36.293403] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
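After the second fill and checksum, the test moves to the FTL property interface. Enabling verbose_mode makes bdev_ftl_get_properties return per-band validity and per-chunk utilization (the large JSON dumps that follow), and a jq filter over that output counts how many cache chunks actually hold data. The commands, as traced below:

# Turn on verbose properties, then count non-empty cache chunks (expected non-zero after the fills).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'

The run below reports used=3: two CLOSED chunks at utilization 1.0 plus one OPEN chunk at 0.001953125.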
00:25:50.198 [2024-11-19 23:38:36.293500] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91588 ] 00:25:50.457 [2024-11-19 23:38:36.441715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.457 [2024-11-19 23:38:36.464120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.860  [2024-11-19T23:38:38.689Z] Copying: 622/1024 [MB] (622 MBps) [2024-11-19T23:38:39.258Z] Copying: 1024/1024 [MB] (average 605 MBps) 00:25:53.066 00:25:53.066 23:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:53.066 23:38:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:55.613 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:55.613 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3aec40cab22347243aa4ea3de33bcca0 00:25:55.613 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:55.613 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:55.613 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:55.613 [2024-11-19 23:38:41.550341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.613 [2024-11-19 23:38:41.550482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:55.613 [2024-11-19 23:38:41.550538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:55.613 [2024-11-19 23:38:41.550557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.613 [2024-11-19 23:38:41.550616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.613 [2024-11-19 23:38:41.550637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:55.613 [2024-11-19 23:38:41.550653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:55.613 [2024-11-19 23:38:41.550705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.613 [2024-11-19 23:38:41.550754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.613 [2024-11-19 23:38:41.550773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:55.613 [2024-11-19 23:38:41.550833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:55.613 [2024-11-19 23:38:41.550877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.613 [2024-11-19 23:38:41.550945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.589 ms, result 0 00:25:55.613 true 00:25:55.613 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:55.613 { 00:25:55.613 "name": "ftl", 00:25:55.613 "properties": [ 00:25:55.613 { 00:25:55.613 "name": "superblock_version", 00:25:55.613 "value": 5, 00:25:55.613 "read-only": true 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "name": "base_device", 00:25:55.613 "bands": [ 00:25:55.613 { 00:25:55.613 "id": 0, 00:25:55.613 "state": "FREE", 00:25:55.613 "validity": 0.0 
00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "id": 1, 00:25:55.613 "state": "FREE", 00:25:55.613 "validity": 0.0 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "id": 2, 00:25:55.613 "state": "FREE", 00:25:55.613 "validity": 0.0 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "id": 3, 00:25:55.613 "state": "FREE", 00:25:55.613 "validity": 0.0 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "id": 4, 00:25:55.613 "state": "FREE", 00:25:55.613 "validity": 0.0 00:25:55.613 }, 00:25:55.613 { 00:25:55.613 "id": 5, 00:25:55.613 "state": "FREE", 00:25:55.613 "validity": 0.0 00:25:55.613 }, 00:25:55.614 { 00:25:55.614 "id": 6, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 7, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 8, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 9, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 10, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 11, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 12, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 13, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 14, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 15, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 16, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 17, 00:25:55.614 "state": "FREE", 00:25:55.614 "validity": 0.0 00:25:55.614 } 00:25:55.614 ], 00:25:55.614 "read-only": true 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "name": "cache_device", 00:25:55.614 "type": "bdev", 00:25:55.614 "chunks": [ 00:25:55.614 { 00:25:55.614 "id": 0, 00:25:55.614 "state": "INACTIVE", 00:25:55.614 "utilization": 0.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 1, 00:25:55.614 "state": "CLOSED", 00:25:55.614 "utilization": 1.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 2, 00:25:55.614 "state": "CLOSED", 00:25:55.614 "utilization": 1.0 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 3, 00:25:55.614 "state": "OPEN", 00:25:55.614 "utilization": 0.001953125 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "id": 4, 00:25:55.614 "state": "OPEN", 00:25:55.614 "utilization": 0.0 00:25:55.614 } 00:25:55.614 ], 00:25:55.614 "read-only": true 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "name": "verbose_mode", 00:25:55.614 "value": true, 00:25:55.614 "unit": "", 00:25:55.614 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:55.614 }, 00:25:55.614 { 00:25:55.614 "name": "prep_upgrade_on_shutdown", 00:25:55.614 "value": false, 00:25:55.614 "unit": "", 00:25:55.614 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:55.614 } 00:25:55.614 ] 00:25:55.614 } 00:25:55.614 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:55.875 [2024-11-19 23:38:41.950621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:55.875 [2024-11-19 23:38:41.950727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:55.875 [2024-11-19 23:38:41.950778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:55.875 [2024-11-19 23:38:41.950795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.875 [2024-11-19 23:38:41.950826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.875 [2024-11-19 23:38:41.950842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:55.875 [2024-11-19 23:38:41.950856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:55.875 [2024-11-19 23:38:41.950871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.875 [2024-11-19 23:38:41.950893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.875 [2024-11-19 23:38:41.950909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:55.875 [2024-11-19 23:38:41.950924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:55.875 [2024-11-19 23:38:41.950968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.876 [2024-11-19 23:38:41.951026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.390 ms, result 0 00:25:55.876 true 00:25:55.876 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:55.876 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:55.876 23:38:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:56.136 23:38:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:56.136 23:38:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:56.136 23:38:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:56.396 [2024-11-19 23:38:42.326943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.396 [2024-11-19 23:38:42.327063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:56.396 [2024-11-19 23:38:42.327108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:56.396 [2024-11-19 23:38:42.327125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.396 [2024-11-19 23:38:42.327156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.396 [2024-11-19 23:38:42.327172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:56.396 [2024-11-19 23:38:42.327187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:56.396 [2024-11-19 23:38:42.327202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.396 [2024-11-19 23:38:42.327224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.396 [2024-11-19 23:38:42.327240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:56.396 [2024-11-19 23:38:42.327254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:56.396 [2024-11-19 23:38:42.327296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:56.396 [2024-11-19 23:38:42.327352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.397 ms, result 0 00:25:56.396 true 00:25:56.396 23:38:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:56.396 { 00:25:56.396 "name": "ftl", 00:25:56.396 "properties": [ 00:25:56.396 { 00:25:56.396 "name": "superblock_version", 00:25:56.396 "value": 5, 00:25:56.396 "read-only": true 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "name": "base_device", 00:25:56.396 "bands": [ 00:25:56.396 { 00:25:56.396 "id": 0, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 1, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 2, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 3, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 4, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 5, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 6, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 7, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 8, 00:25:56.396 "state": "FREE", 00:25:56.396 "validity": 0.0 00:25:56.396 }, 00:25:56.396 { 00:25:56.396 "id": 9, 00:25:56.396 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 10, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 11, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 12, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 13, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 14, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 15, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 16, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 17, 00:25:56.397 "state": "FREE", 00:25:56.397 "validity": 0.0 00:25:56.397 } 00:25:56.397 ], 00:25:56.397 "read-only": true 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "name": "cache_device", 00:25:56.397 "type": "bdev", 00:25:56.397 "chunks": [ 00:25:56.397 { 00:25:56.397 "id": 0, 00:25:56.397 "state": "INACTIVE", 00:25:56.397 "utilization": 0.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 1, 00:25:56.397 "state": "CLOSED", 00:25:56.397 "utilization": 1.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 2, 00:25:56.397 "state": "CLOSED", 00:25:56.397 "utilization": 1.0 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 3, 00:25:56.397 "state": "OPEN", 00:25:56.397 "utilization": 0.001953125 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "id": 4, 00:25:56.397 "state": "OPEN", 00:25:56.397 "utilization": 0.0 00:25:56.397 } 00:25:56.397 ], 00:25:56.397 "read-only": true 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "name": "verbose_mode", 
00:25:56.397 "value": true, 00:25:56.397 "unit": "", 00:25:56.397 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:56.397 }, 00:25:56.397 { 00:25:56.397 "name": "prep_upgrade_on_shutdown", 00:25:56.397 "value": true, 00:25:56.397 "unit": "", 00:25:56.397 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:56.397 } 00:25:56.397 ] 00:25:56.397 } 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91271 ]] 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91271 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91271 ']' 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91271 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91271 00:25:56.397 killing process with pid 91271 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91271' 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 91271 00:25:56.397 23:38:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91271 00:25:56.657 [2024-11-19 23:38:42.654230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:56.657 [2024-11-19 23:38:42.658056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.657 [2024-11-19 23:38:42.658092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:56.657 [2024-11-19 23:38:42.658102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:56.657 [2024-11-19 23:38:42.658108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.657 [2024-11-19 23:38:42.658135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:56.657 [2024-11-19 23:38:42.658516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.657 [2024-11-19 23:38:42.658539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:56.658 [2024-11-19 23:38:42.658550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.370 ms 00:25:56.658 [2024-11-19 23:38:42.658556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.228928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.228979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:06.657 [2024-11-19 23:38:51.228993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8570.324 ms 00:26:06.657 [2024-11-19 23:38:51.229000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.230131] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.230145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:06.657 [2024-11-19 23:38:51.230153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.118 ms 00:26:06.657 [2024-11-19 23:38:51.230159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.231146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.231166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:06.657 [2024-11-19 23:38:51.231178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.849 ms 00:26:06.657 [2024-11-19 23:38:51.231184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.233450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.233568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:06.657 [2024-11-19 23:38:51.233580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.232 ms 00:26:06.657 [2024-11-19 23:38:51.233585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.235644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.235669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:06.657 [2024-11-19 23:38:51.235676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.023 ms 00:26:06.657 [2024-11-19 23:38:51.235682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.235749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.235757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:06.657 [2024-11-19 23:38:51.235763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:26:06.657 [2024-11-19 23:38:51.235769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.236877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.236903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:06.657 [2024-11-19 23:38:51.236909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.087 ms 00:26:06.657 [2024-11-19 23:38:51.236914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.657 [2024-11-19 23:38:51.238135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.657 [2024-11-19 23:38:51.238160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:06.657 [2024-11-19 23:38:51.238167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.197 ms 00:26:06.657 [2024-11-19 23:38:51.238173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.239188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.658 [2024-11-19 23:38:51.239214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:06.658 [2024-11-19 23:38:51.239220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.993 ms 00:26:06.658 [2024-11-19 23:38:51.239225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.240159] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.658 [2024-11-19 23:38:51.240265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:06.658 [2024-11-19 23:38:51.240277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.889 ms 00:26:06.658 [2024-11-19 23:38:51.240282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.240304] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:06.658 [2024-11-19 23:38:51.240314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:06.658 [2024-11-19 23:38:51.240322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:06.658 [2024-11-19 23:38:51.240328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:06.658 [2024-11-19 23:38:51.240334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:06.658 [2024-11-19 23:38:51.240419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:06.658 [2024-11-19 23:38:51.240425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f5b45ef2-d912-4d0e-aec2-7f43cd25ea7a 00:26:06.658 [2024-11-19 23:38:51.240431] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:06.658 [2024-11-19 23:38:51.240442] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:26:06.658 [2024-11-19 23:38:51.240447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:06.658 [2024-11-19 23:38:51.240455] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:06.658 [2024-11-19 23:38:51.240460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:06.658 [2024-11-19 23:38:51.240467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:06.658 [2024-11-19 23:38:51.240472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:06.658 [2024-11-19 23:38:51.240477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:06.658 [2024-11-19 23:38:51.240482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:06.658 [2024-11-19 23:38:51.240488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.658 [2024-11-19 23:38:51.240498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:06.658 [2024-11-19 23:38:51.240504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:26:06.658 [2024-11-19 23:38:51.240510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.241768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.658 [2024-11-19 23:38:51.241794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:06.658 [2024-11-19 23:38:51.241801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.247 ms 00:26:06.658 [2024-11-19 23:38:51.241807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.241875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.658 [2024-11-19 23:38:51.241881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:06.658 [2024-11-19 23:38:51.241889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:06.658 [2024-11-19 23:38:51.241895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.246385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.246414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:06.658 [2024-11-19 23:38:51.246422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.246428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.246449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.246455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:06.658 [2024-11-19 23:38:51.246461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.246467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.246514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.246524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:06.658 [2024-11-19 23:38:51.246530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.246536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.246548] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.246554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:06.658 [2024-11-19 23:38:51.246564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.246570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.254841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.254974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:06.658 [2024-11-19 23:38:51.255050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.255068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.261886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.261989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:06.658 [2024-11-19 23:38:51.262039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.262057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.262101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.262171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:06.658 [2024-11-19 23:38:51.262194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.262208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.262257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.262276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:06.658 [2024-11-19 23:38:51.262291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.262305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.262444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.262463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:06.658 [2024-11-19 23:38:51.262509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.262533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.262570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.262625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:06.658 [2024-11-19 23:38:51.262675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.262719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 [2024-11-19 23:38:51.262780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.658 [2024-11-19 23:38:51.262828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:06.658 [2024-11-19 23:38:51.262846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.658 [2024-11-19 23:38:51.262873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.658 
[2024-11-19 23:38:51.262918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:06.659 [2024-11-19 23:38:51.262963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:06.659 [2024-11-19 23:38:51.262980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:06.659 [2024-11-19 23:38:51.263003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.659 [2024-11-19 23:38:51.263110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8605.004 ms, result 0 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91776 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91776 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91776 ']' 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.209 23:38:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:09.209 [2024-11-19 23:38:54.940857] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
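Annotation: the statistics printed by ftl_dev_dump_stats during the shutdown above are internally consistent. The reported WAF is total media writes divided by user writes, and each band's validity is valid blocks over band size; Band 3's 2048 / 261120 reappears later as the 0.007843137... validity of band id 2 in the bdev_ftl_get_properties JSON (the debug dump numbers bands from 1, the JSON from 0). A quick check with plain awk, not part of the harness, values copied from the dump:

# WAF = total writes / user writes, from the ftl_dev_dump_stats lines above
awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'        # prints WAF: 1.5006
# Band 3 validity = valid blocks / band size, from ftl_dev_dump_bands above
awk 'BEGIN { printf "validity: %.9f\n", 2048 / 261120 }'     # prints validity: 0.007843137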
00:26:09.209 [2024-11-19 23:38:54.941168] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91776 ] 00:26:09.209 [2024-11-19 23:38:55.099189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.209 [2024-11-19 23:38:55.123783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.209 [2024-11-19 23:38:55.372453] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:09.209 [2024-11-19 23:38:55.372661] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:09.470 [2024-11-19 23:38:55.510355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.470 [2024-11-19 23:38:55.510497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:09.470 [2024-11-19 23:38:55.510518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:09.470 [2024-11-19 23:38:55.510527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.470 [2024-11-19 23:38:55.510588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.510599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:09.471 [2024-11-19 23:38:55.510612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:09.471 [2024-11-19 23:38:55.510619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.510644] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:09.471 [2024-11-19 23:38:55.510891] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:09.471 [2024-11-19 23:38:55.510909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.510917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:09.471 [2024-11-19 23:38:55.510925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.274 ms 00:26:09.471 [2024-11-19 23:38:55.510932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.511980] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:09.471 [2024-11-19 23:38:55.514576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.514618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:09.471 [2024-11-19 23:38:55.514628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.598 ms 00:26:09.471 [2024-11-19 23:38:55.514636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.514686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.514695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:09.471 [2024-11-19 23:38:55.514703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:09.471 [2024-11-19 23:38:55.514714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.519560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 
23:38:55.519589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:09.471 [2024-11-19 23:38:55.519598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.776 ms 00:26:09.471 [2024-11-19 23:38:55.519612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.519652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.519660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:09.471 [2024-11-19 23:38:55.519668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:09.471 [2024-11-19 23:38:55.519678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.519719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.519748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:09.471 [2024-11-19 23:38:55.519755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:09.471 [2024-11-19 23:38:55.519762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.519784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:09.471 [2024-11-19 23:38:55.521172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.521203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:09.471 [2024-11-19 23:38:55.521212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.394 ms 00:26:09.471 [2024-11-19 23:38:55.521219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.521246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.521257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:09.471 [2024-11-19 23:38:55.521266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:09.471 [2024-11-19 23:38:55.521273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.521292] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:09.471 [2024-11-19 23:38:55.521309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:09.471 [2024-11-19 23:38:55.521342] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:09.471 [2024-11-19 23:38:55.521358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:09.471 [2024-11-19 23:38:55.521462] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:09.471 [2024-11-19 23:38:55.521476] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:09.471 [2024-11-19 23:38:55.521488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:09.471 [2024-11-19 23:38:55.521498] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:09.471 [2024-11-19 23:38:55.521506] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:09.471 [2024-11-19 23:38:55.521514] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:09.471 [2024-11-19 23:38:55.521521] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:09.471 [2024-11-19 23:38:55.521529] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:09.471 [2024-11-19 23:38:55.521536] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:09.471 [2024-11-19 23:38:55.521543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.521550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:09.471 [2024-11-19 23:38:55.521562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:26:09.471 [2024-11-19 23:38:55.521572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.521655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.471 [2024-11-19 23:38:55.521663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:09.471 [2024-11-19 23:38:55.521673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:26:09.471 [2024-11-19 23:38:55.521680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.471 [2024-11-19 23:38:55.521795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:09.471 [2024-11-19 23:38:55.521806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:09.471 [2024-11-19 23:38:55.521813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:09.471 [2024-11-19 23:38:55.521824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.521832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:09.471 [2024-11-19 23:38:55.521840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.521848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:09.471 [2024-11-19 23:38:55.521856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:09.471 [2024-11-19 23:38:55.521864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:09.471 [2024-11-19 23:38:55.521872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.521881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:09.471 [2024-11-19 23:38:55.521889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:09.471 [2024-11-19 23:38:55.521896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.521904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:09.471 [2024-11-19 23:38:55.521912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:09.471 [2024-11-19 23:38:55.521924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.521937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:09.471 [2024-11-19 23:38:55.521945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:09.471 [2024-11-19 23:38:55.521953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.521960] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:09.471 [2024-11-19 23:38:55.521968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:09.471 [2024-11-19 23:38:55.521975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:09.471 [2024-11-19 23:38:55.521982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:09.471 [2024-11-19 23:38:55.521990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:09.471 [2024-11-19 23:38:55.521998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:09.471 [2024-11-19 23:38:55.522005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:09.471 [2024-11-19 23:38:55.522013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:09.471 [2024-11-19 23:38:55.522020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:09.471 [2024-11-19 23:38:55.522027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:09.471 [2024-11-19 23:38:55.522034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:09.471 [2024-11-19 23:38:55.522041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:09.471 [2024-11-19 23:38:55.522048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:09.471 [2024-11-19 23:38:55.522058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:09.471 [2024-11-19 23:38:55.522065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.522073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:09.471 [2024-11-19 23:38:55.522080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:09.471 [2024-11-19 23:38:55.522087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.522094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:09.471 [2024-11-19 23:38:55.522101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:09.471 [2024-11-19 23:38:55.522109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.522116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:09.471 [2024-11-19 23:38:55.522123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:09.471 [2024-11-19 23:38:55.522131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.471 [2024-11-19 23:38:55.522139] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:09.472 [2024-11-19 23:38:55.522148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:09.472 [2024-11-19 23:38:55.522156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:09.472 [2024-11-19 23:38:55.522164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:09.472 [2024-11-19 23:38:55.522172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:09.472 [2024-11-19 23:38:55.522182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:09.472 [2024-11-19 23:38:55.522189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:09.472 [2024-11-19 23:38:55.522197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:09.472 [2024-11-19 23:38:55.522203] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:09.472 [2024-11-19 23:38:55.522209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:09.472 [2024-11-19 23:38:55.522218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:09.472 [2024-11-19 23:38:55.522227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:09.472 [2024-11-19 23:38:55.522242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:09.472 [2024-11-19 23:38:55.522262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:09.472 [2024-11-19 23:38:55.522269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:09.472 [2024-11-19 23:38:55.522276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:09.472 [2024-11-19 23:38:55.522282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:09.472 [2024-11-19 23:38:55.522332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:09.472 [2024-11-19 23:38:55.522340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:09.472 [2024-11-19 23:38:55.522355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:09.472 [2024-11-19 23:38:55.522362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:09.472 [2024-11-19 23:38:55.522369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:09.472 [2024-11-19 23:38:55.522377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:09.472 [2024-11-19 23:38:55.522386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:09.472 [2024-11-19 23:38:55.522393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.667 ms 00:26:09.472 [2024-11-19 23:38:55.522400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:09.472 [2024-11-19 23:38:55.522449] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:09.472 [2024-11-19 23:38:55.522462] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:13.680 [2024-11-19 23:38:59.252757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.680 [2024-11-19 23:38:59.252805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:13.680 [2024-11-19 23:38:59.252819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3730.294 ms 00:26:13.680 [2024-11-19 23:38:59.252826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.680 [2024-11-19 23:38:59.260244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.680 [2024-11-19 23:38:59.260279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:13.680 [2024-11-19 23:38:59.260288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.347 ms 00:26:13.680 [2024-11-19 23:38:59.260294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.680 [2024-11-19 23:38:59.260346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.680 [2024-11-19 23:38:59.260353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:13.680 [2024-11-19 23:38:59.260360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:13.680 [2024-11-19 23:38:59.260365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.680 [2024-11-19 23:38:59.267770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.680 [2024-11-19 23:38:59.267805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:13.680 [2024-11-19 23:38:59.267813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.377 ms 00:26:13.680 [2024-11-19 23:38:59.267832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.680 [2024-11-19 23:38:59.267854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.267860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:13.681 [2024-11-19 23:38:59.267867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:13.681 [2024-11-19 23:38:59.267875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.268172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.268196] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:13.681 [2024-11-19 23:38:59.268208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:26:13.681 [2024-11-19 23:38:59.268214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.268249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.268256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:13.681 [2024-11-19 23:38:59.268261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:13.681 [2024-11-19 23:38:59.268267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.272980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.273006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:13.681 [2024-11-19 23:38:59.273013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.691 ms 00:26:13.681 [2024-11-19 23:38:59.273019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.275047] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:13.681 [2024-11-19 23:38:59.275077] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:13.681 [2024-11-19 23:38:59.275086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.275092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:13.681 [2024-11-19 23:38:59.275098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.003 ms 00:26:13.681 [2024-11-19 23:38:59.275104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.278207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.278363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:13.681 [2024-11-19 23:38:59.278376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.073 ms 00:26:13.681 [2024-11-19 23:38:59.278382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.279725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.279840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:13.681 [2024-11-19 23:38:59.279890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.310 ms 00:26:13.681 [2024-11-19 23:38:59.279909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.281122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.281214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:13.681 [2024-11-19 23:38:59.281260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.171 ms 00:26:13.681 [2024-11-19 23:38:59.281268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.281506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.281523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:13.681 [2024-11-19 
23:38:59.281529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:26:13.681 [2024-11-19 23:38:59.281535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.310787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.310948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:13.681 [2024-11-19 23:38:59.311014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.237 ms 00:26:13.681 [2024-11-19 23:38:59.311038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.317924] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:13.681 [2024-11-19 23:38:59.318561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.318641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:13.681 [2024-11-19 23:38:59.318682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.475 ms 00:26:13.681 [2024-11-19 23:38:59.318705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.318779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.318838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:13.681 [2024-11-19 23:38:59.318857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:13.681 [2024-11-19 23:38:59.318872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.318913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.318921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:13.681 [2024-11-19 23:38:59.318931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:13.681 [2024-11-19 23:38:59.318940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.318959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.318966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:13.681 [2024-11-19 23:38:59.318972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:13.681 [2024-11-19 23:38:59.318980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.319002] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:13.681 [2024-11-19 23:38:59.319014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.319020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:13.681 [2024-11-19 23:38:59.319026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:13.681 [2024-11-19 23:38:59.319033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.321950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.321991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:13.681 [2024-11-19 23:38:59.322000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.900 ms 00:26:13.681 [2024-11-19 23:38:59.322006] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.322067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.322078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:13.681 [2024-11-19 23:38:59.322087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:13.681 [2024-11-19 23:38:59.322092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.322804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3812.131 ms, result 0 00:26:13.681 [2024-11-19 23:38:59.335098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.681 [2024-11-19 23:38:59.351087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:13.681 [2024-11-19 23:38:59.359174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:13.681 23:38:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.681 23:38:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:13.681 23:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:13.681 23:38:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:13.681 23:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:13.681 [2024-11-19 23:38:59.599237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.599276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:13.681 [2024-11-19 23:38:59.599286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:13.681 [2024-11-19 23:38:59.599292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.599313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.599320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:13.681 [2024-11-19 23:38:59.599327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:13.681 [2024-11-19 23:38:59.599335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.599349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.681 [2024-11-19 23:38:59.599355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:13.681 [2024-11-19 23:38:59.599361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:13.681 [2024-11-19 23:38:59.599367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.681 [2024-11-19 23:38:59.599409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.165 ms, result 0 00:26:13.681 true 00:26:13.681 23:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:13.681 { 00:26:13.681 "name": "ftl", 00:26:13.681 "properties": [ 00:26:13.681 { 00:26:13.681 "name": "superblock_version", 00:26:13.681 "value": 5, 00:26:13.681 "read-only": true 00:26:13.681 }, 00:26:13.681 { 
00:26:13.681 "name": "base_device", 00:26:13.681 "bands": [ 00:26:13.681 { 00:26:13.681 "id": 0, 00:26:13.681 "state": "CLOSED", 00:26:13.681 "validity": 1.0 00:26:13.681 }, 00:26:13.681 { 00:26:13.681 "id": 1, 00:26:13.681 "state": "CLOSED", 00:26:13.681 "validity": 1.0 00:26:13.681 }, 00:26:13.681 { 00:26:13.681 "id": 2, 00:26:13.681 "state": "CLOSED", 00:26:13.681 "validity": 0.007843137254901933 00:26:13.681 }, 00:26:13.681 { 00:26:13.681 "id": 3, 00:26:13.681 "state": "FREE", 00:26:13.681 "validity": 0.0 00:26:13.681 }, 00:26:13.681 { 00:26:13.681 "id": 4, 00:26:13.681 "state": "FREE", 00:26:13.681 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 5, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 6, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 7, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 8, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 9, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 10, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 11, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 12, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 13, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 14, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 15, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 16, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 17, 00:26:13.682 "state": "FREE", 00:26:13.682 "validity": 0.0 00:26:13.682 } 00:26:13.682 ], 00:26:13.682 "read-only": true 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "name": "cache_device", 00:26:13.682 "type": "bdev", 00:26:13.682 "chunks": [ 00:26:13.682 { 00:26:13.682 "id": 0, 00:26:13.682 "state": "INACTIVE", 00:26:13.682 "utilization": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 1, 00:26:13.682 "state": "OPEN", 00:26:13.682 "utilization": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 2, 00:26:13.682 "state": "OPEN", 00:26:13.682 "utilization": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 3, 00:26:13.682 "state": "FREE", 00:26:13.682 "utilization": 0.0 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "id": 4, 00:26:13.682 "state": "FREE", 00:26:13.682 "utilization": 0.0 00:26:13.682 } 00:26:13.682 ], 00:26:13.682 "read-only": true 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "name": "verbose_mode", 00:26:13.682 "value": true, 00:26:13.682 "unit": "", 00:26:13.682 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:13.682 }, 00:26:13.682 { 00:26:13.682 "name": "prep_upgrade_on_shutdown", 00:26:13.682 "value": false, 00:26:13.682 "unit": "", 00:26:13.682 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:13.682 } 00:26:13.682 ] 00:26:13.682 } 00:26:13.682 23:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:13.682 23:38:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:13.682 23:38:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:13.944 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:13.944 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:13.944 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:13.944 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:13.944 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:14.205 Validate MD5 checksum, iteration 1 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:14.205 23:39:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:14.205 [2024-11-19 23:39:00.292711] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
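Annotation: both property checks above follow the same pattern: fetch the FTL bdev's properties over RPC and count matching entries with jq. Reproduced from the xtrace, paths as in the log:

# Count NV-cache chunks that hold data. With the JSON shown above, every
# chunk reports utilization 0.0, so this prints 0 and the test sets used=0.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
  | jq '[.properties[] | select(.name == "cache_device")
         | .chunks[] | select(.utilization != 0.0)] | length'

The second query at upgrade_shutdown.sh@89 filters on .name == "bands"; no property in the JSON above carries that name (the band list sits inside the "base_device" property), so it returns 0 and opened=0 follows either way.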
00:26:14.205 [2024-11-19 23:39:00.293155] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91846 ] 00:26:14.467 [2024-11-19 23:39:00.448046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.467 [2024-11-19 23:39:00.472317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.855  [2024-11-19T23:39:02.991Z] Copying: 530/1024 [MB] (530 MBps) [2024-11-19T23:39:02.991Z] Copying: 1000/1024 [MB] (470 MBps) [2024-11-19T23:39:03.934Z] Copying: 1024/1024 [MB] (average 500 MBps) 00:26:17.742 00:26:17.742 23:39:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:17.742 23:39:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:19.659 Validate MD5 checksum, iteration 2 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dc8897a8cb17bcf18b8741bf340dca76 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dc8897a8cb17bcf18b8741bf340dca76 != \d\c\8\8\9\7\a\8\c\b\1\7\b\c\f\1\8\b\8\7\4\1\b\f\3\4\0\d\c\a\7\6 ]] 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:19.659 23:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:19.922 [2024-11-19 23:39:05.859401] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
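Annotation: pieced together from the xtrace markers (upgrade_shutdown.sh@96-105), the validation loop reads one 1024 MiB window per iteration from the ftln1 NVMe/TCP bdev via the tcp_dd wrapper around spdk_dd, hashes the window, and compares against a stored sum; the character-escaped right-hand side in the trace ([[ dc88... != \d\c\8... ]]) is just how xtrace renders the unquoted expansion. A sketch of that control flow, with the names iterations, testfile, and expected[] assumed rather than taken from the script:

skip=0
for ((i = 0; i < iterations; i++)); do
  echo "Validate MD5 checksum, iteration $((i + 1))"
  # Flags match the trace: 1 MiB blocks x 1024 = one 1 GiB window, queue depth 2
  tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  skip=$((skip + 1024))                      # advance to the next window (@100)
  sum=$(md5sum "$testfile" | cut -f1 -d' ')  # hash what was just read (@102-103)
  [[ $sum != "${expected[i]}" ]] && exit 1   # @105; both iterations here matched
done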
00:26:19.922 [2024-11-19 23:39:05.859654] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91908 ] 00:26:19.922 [2024-11-19 23:39:06.015917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.922 [2024-11-19 23:39:06.040148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.312  [2024-11-19T23:39:08.084Z] Copying: 597/1024 [MB] (597 MBps) [2024-11-19T23:39:08.656Z] Copying: 1024/1024 [MB] (average 611 MBps) 00:26:22.464 00:26:22.464 23:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:22.464 23:39:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3aec40cab22347243aa4ea3de33bcca0 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3aec40cab22347243aa4ea3de33bcca0 != \3\a\e\c\4\0\c\a\b\2\2\3\4\7\2\4\3\a\a\4\e\a\3\d\e\3\3\b\c\c\a\0 ]] 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 91776 ]] 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 91776 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91964 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91964 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91964 ']' 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.009 23:39:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:25.009 [2024-11-19 23:39:10.884343] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:26:25.009 [2024-11-19 23:39:10.884482] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91964 ] 00:26:25.009 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 91776 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:25.009 [2024-11-19 23:39:11.047273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.009 [2024-11-19 23:39:11.073761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.270 [2024-11-19 23:39:11.418620] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:25.270 [2024-11-19 23:39:11.418711] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:25.532 [2024-11-19 23:39:11.571675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.532 [2024-11-19 23:39:11.571751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:25.532 [2024-11-19 23:39:11.571774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:25.532 [2024-11-19 23:39:11.571784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.532 [2024-11-19 23:39:11.571870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.532 [2024-11-19 23:39:11.571883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:25.532 [2024-11-19 23:39:11.571896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:26:25.532 [2024-11-19 23:39:11.571904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.532 [2024-11-19 23:39:11.571936] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:25.532 [2024-11-19 23:39:11.572256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:25.532 [2024-11-19 23:39:11.572292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.532 [2024-11-19 23:39:11.572303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:25.532 [2024-11-19 23:39:11.572321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.367 ms 00:26:25.532 [2024-11-19 23:39:11.572330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.572670] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:25.533 [2024-11-19 23:39:11.580459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.580513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:25.533 [2024-11-19 23:39:11.580535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.788 ms 00:26:25.533 [2024-11-19 23:39:11.580545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.582434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:25.533 [2024-11-19 23:39:11.582489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:25.533 [2024-11-19 23:39:11.582502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:25.533 [2024-11-19 23:39:11.582515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.582883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.582898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:25.533 [2024-11-19 23:39:11.582910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:26:25.533 [2024-11-19 23:39:11.582919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.582962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.582971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:25.533 [2024-11-19 23:39:11.582981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:25.533 [2024-11-19 23:39:11.582991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.583023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.583039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:25.533 [2024-11-19 23:39:11.583053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:25.533 [2024-11-19 23:39:11.583064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.583089] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:25.533 [2024-11-19 23:39:11.584482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.584537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:25.533 [2024-11-19 23:39:11.584549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.399 ms 00:26:25.533 [2024-11-19 23:39:11.584558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.584599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.584613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:25.533 [2024-11-19 23:39:11.584623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:25.533 [2024-11-19 23:39:11.584631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.584654] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:25.533 [2024-11-19 23:39:11.584679] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:25.533 [2024-11-19 23:39:11.584723] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:25.533 [2024-11-19 23:39:11.584780] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:25.533 [2024-11-19 23:39:11.584896] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:25.533 [2024-11-19 23:39:11.584909] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:25.533 [2024-11-19 23:39:11.584921] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:25.533 [2024-11-19 23:39:11.584934] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:25.533 [2024-11-19 23:39:11.584944] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:25.533 [2024-11-19 23:39:11.584953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:25.533 [2024-11-19 23:39:11.584961] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:25.533 [2024-11-19 23:39:11.584970] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:25.533 [2024-11-19 23:39:11.584977] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:25.533 [2024-11-19 23:39:11.584986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.584995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:25.533 [2024-11-19 23:39:11.585008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:26:25.533 [2024-11-19 23:39:11.585015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.585100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.533 [2024-11-19 23:39:11.585119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:25.533 [2024-11-19 23:39:11.585130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:26:25.533 [2024-11-19 23:39:11.585138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.533 [2024-11-19 23:39:11.585244] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:25.533 [2024-11-19 23:39:11.585258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:25.533 [2024-11-19 23:39:11.585269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:25.533 [2024-11-19 23:39:11.585283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:25.533 [2024-11-19 23:39:11.585300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:25.533 [2024-11-19 23:39:11.585316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:25.533 [2024-11-19 23:39:11.585328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:25.533 [2024-11-19 23:39:11.585337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:25.533 [2024-11-19 23:39:11.585355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:25.533 [2024-11-19 23:39:11.585363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:25.533 [2024-11-19 23:39:11.585394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:25.533 [2024-11-19 23:39:11.585402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:25.533 [2024-11-19 23:39:11.585418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:25.533 [2024-11-19 23:39:11.585428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:25.533 [2024-11-19 23:39:11.585448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:25.533 [2024-11-19 23:39:11.585458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.533 [2024-11-19 23:39:11.585466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:25.533 [2024-11-19 23:39:11.585475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:25.533 [2024-11-19 23:39:11.585483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.533 [2024-11-19 23:39:11.585491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:25.533 [2024-11-19 23:39:11.585501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:25.533 [2024-11-19 23:39:11.585508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.533 [2024-11-19 23:39:11.585515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:25.533 [2024-11-19 23:39:11.585525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:25.533 [2024-11-19 23:39:11.585532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:25.533 [2024-11-19 23:39:11.585540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:25.533 [2024-11-19 23:39:11.585548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:25.533 [2024-11-19 23:39:11.585555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:25.533 [2024-11-19 23:39:11.585570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:25.533 [2024-11-19 23:39:11.585578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:25.533 [2024-11-19 23:39:11.585592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:25.533 [2024-11-19 23:39:11.585614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:25.533 [2024-11-19 23:39:11.585620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585628] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:25.533 [2024-11-19 23:39:11.585638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:25.533 [2024-11-19 23:39:11.585654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:25.533 [2024-11-19 23:39:11.585662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:25.533 [2024-11-19 23:39:11.585670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:25.533 [2024-11-19 23:39:11.585676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:25.533 [2024-11-19 23:39:11.585685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:25.533 [2024-11-19 23:39:11.585692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:25.533 [2024-11-19 23:39:11.585699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:25.533 [2024-11-19 23:39:11.585714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:25.534 [2024-11-19 23:39:11.585723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:25.534 [2024-11-19 23:39:11.586147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.586190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:25.534 [2024-11-19 23:39:11.586220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.586248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.586279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:25.534 [2024-11-19 23:39:11.586309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:25.534 [2024-11-19 23:39:11.586339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:25.534 [2024-11-19 23:39:11.586893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:25.534 [2024-11-19 23:39:11.586968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:25.534 [2024-11-19 23:39:11.587336] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:25.534 [2024-11-19 23:39:11.587346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:25.534 [2024-11-19 23:39:11.587362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:25.534 [2024-11-19 23:39:11.587371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:25.534 [2024-11-19 23:39:11.587378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:25.534 [2024-11-19 23:39:11.587390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.587401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:25.534 [2024-11-19 23:39:11.587425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.215 ms 00:26:25.534 [2024-11-19 23:39:11.587434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.603421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.603480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:25.534 [2024-11-19 23:39:11.603495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.857 ms 00:26:25.534 [2024-11-19 23:39:11.603515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.603565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.603577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:25.534 [2024-11-19 23:39:11.603587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:25.534 [2024-11-19 23:39:11.603599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.621841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.621894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:25.534 [2024-11-19 23:39:11.621909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.172 ms 00:26:25.534 [2024-11-19 23:39:11.621919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.621964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.621974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:25.534 [2024-11-19 23:39:11.621989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:25.534 [2024-11-19 23:39:11.622005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.622126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.622141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:25.534 [2024-11-19 23:39:11.622151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:25.534 [2024-11-19 23:39:11.622160] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.622211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.622220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:25.534 [2024-11-19 23:39:11.622230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:25.534 [2024-11-19 23:39:11.622242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.634527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.634709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:25.534 [2024-11-19 23:39:11.634877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.252 ms 00:26:25.534 [2024-11-19 23:39:11.634908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.635060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.635092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:25.534 [2024-11-19 23:39:11.635183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:25.534 [2024-11-19 23:39:11.635217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.660881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.661139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:25.534 [2024-11-19 23:39:11.661384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.626 ms 00:26:25.534 [2024-11-19 23:39:11.661413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.663840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.663886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:25.534 [2024-11-19 23:39:11.663901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.415 ms 00:26:25.534 [2024-11-19 23:39:11.663920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.694967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.695019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:25.534 [2024-11-19 23:39:11.695032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.975 ms 00:26:25.534 [2024-11-19 23:39:11.695042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.695226] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:25.534 [2024-11-19 23:39:11.695369] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:25.534 [2024-11-19 23:39:11.695505] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:25.534 [2024-11-19 23:39:11.695650] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:25.534 [2024-11-19 23:39:11.695663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.695674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:25.534 [2024-11-19 
23:39:11.695689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:26:25.534 [2024-11-19 23:39:11.695699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.695794] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:25.534 [2024-11-19 23:39:11.695809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.695848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:25.534 [2024-11-19 23:39:11.695859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:25.534 [2024-11-19 23:39:11.695869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.701539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.701588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:25.534 [2024-11-19 23:39:11.701600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.643 ms 00:26:25.534 [2024-11-19 23:39:11.701613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.702770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.702809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:25.534 [2024-11-19 23:39:11.702823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:25.534 [2024-11-19 23:39:11.702832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:25.534 [2024-11-19 23:39:11.702895] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:25.534 [2024-11-19 23:39:11.703165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:25.534 [2024-11-19 23:39:11.703189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:25.534 [2024-11-19 23:39:11.703202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:26:25.534 [2024-11-19 23:39:11.703211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.477 [2024-11-19 23:39:12.405543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.477 [2024-11-19 23:39:12.405624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:26.477 [2024-11-19 23:39:12.405643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 701.882 ms 00:26:26.477 [2024-11-19 23:39:12.405666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.477 [2024-11-19 23:39:12.407615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.477 [2024-11-19 23:39:12.407663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:26.477 [2024-11-19 23:39:12.407687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.345 ms 00:26:26.477 [2024-11-19 23:39:12.407697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.477 [2024-11-19 23:39:12.408237] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:26.477 [2024-11-19 23:39:12.408291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.477 [2024-11-19 23:39:12.408302] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:26.477 [2024-11-19 23:39:12.408314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:26:26.477 [2024-11-19 23:39:12.408324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.477 [2024-11-19 23:39:12.408486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.477 [2024-11-19 23:39:12.408534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:26.477 [2024-11-19 23:39:12.408546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:26.477 [2024-11-19 23:39:12.408555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.477 [2024-11-19 23:39:12.408604] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 705.703 ms, result 0 00:26:26.477 [2024-11-19 23:39:12.408656] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:26.477 [2024-11-19 23:39:12.408792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.477 [2024-11-19 23:39:12.408806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:26.477 [2024-11-19 23:39:12.408815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.137 ms 00:26:26.477 [2024-11-19 23:39:12.408824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.071338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.071419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:27.050 [2024-11-19 23:39:13.071437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 661.901 ms 00:26:27.050 [2024-11-19 23:39:13.071446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.073293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.073339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:27.050 [2024-11-19 23:39:13.073350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.188 ms 00:26:27.050 [2024-11-19 23:39:13.073358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.073892] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:27.050 [2024-11-19 23:39:13.073931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.073941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:27.050 [2024-11-19 23:39:13.073951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:26:27.050 [2024-11-19 23:39:13.073959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.073996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.074005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:27.050 [2024-11-19 23:39:13.074015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:27.050 [2024-11-19 23:39:13.074023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 
23:39:13.074081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 665.418 ms, result 0 00:26:27.050 [2024-11-19 23:39:13.074128] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:27.050 [2024-11-19 23:39:13.074140] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:27.050 [2024-11-19 23:39:13.074151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.074160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:27.050 [2024-11-19 23:39:13.074170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1371.273 ms 00:26:27.050 [2024-11-19 23:39:13.074182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.074214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.074224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:27.050 [2024-11-19 23:39:13.074233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:27.050 [2024-11-19 23:39:13.074241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.083849] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:27.050 [2024-11-19 23:39:13.083995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.084007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:27.050 [2024-11-19 23:39:13.084021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.736 ms 00:26:27.050 [2024-11-19 23:39:13.084030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.084781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.084808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:27.050 [2024-11-19 23:39:13.084819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.659 ms 00:26:27.050 [2024-11-19 23:39:13.084827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.087066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.087092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:27.050 [2024-11-19 23:39:13.087102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.218 ms 00:26:27.050 [2024-11-19 23:39:13.087114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.087159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.087169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:27.050 [2024-11-19 23:39:13.087178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:27.050 [2024-11-19 23:39:13.087186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.087301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.087311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:27.050 
[2024-11-19 23:39:13.087323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:27.050 [2024-11-19 23:39:13.087331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.087354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.087362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:27.050 [2024-11-19 23:39:13.087370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:27.050 [2024-11-19 23:39:13.087377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.087411] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:27.050 [2024-11-19 23:39:13.087421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.087429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:27.050 [2024-11-19 23:39:13.087441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:27.050 [2024-11-19 23:39:13.087470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.087525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.050 [2024-11-19 23:39:13.087535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:27.050 [2024-11-19 23:39:13.087544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:27.050 [2024-11-19 23:39:13.087551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.050 [2024-11-19 23:39:13.088897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1516.684 ms, result 0 00:26:27.050 [2024-11-19 23:39:13.104534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.050 [2024-11-19 23:39:13.120523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:27.050 [2024-11-19 23:39:13.128642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:27.314 Validate MD5 checksum, iteration 1 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:27.314 23:39:13 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:27.314 23:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:27.576 [2024-11-19 23:39:13.529275] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:26:27.576 [2024-11-19 23:39:13.529447] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91999 ] 00:26:27.576 [2024-11-19 23:39:13.692083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.576 [2024-11-19 23:39:13.732846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.962  [2024-11-19T23:39:16.093Z] Copying: 536/1024 [MB] (536 MBps) [2024-11-19T23:39:16.661Z] Copying: 1024/1024 [MB] (average 547 MBps) 00:26:30.469 00:26:30.469 23:39:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:30.469 23:39:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:33.016 Validate MD5 checksum, iteration 2 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dc8897a8cb17bcf18b8741bf340dca76 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dc8897a8cb17bcf18b8741bf340dca76 != \d\c\8\8\9\7\a\8\c\b\1\7\b\c\f\1\8\b\8\7\4\1\b\f\3\4\0\d\c\a\7\6 ]] 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:33.016 23:39:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:33.016 [2024-11-19 23:39:18.854712] Starting SPDK v25.01-pre git sha1 
f22e807f1 / DPDK 23.11.0 initialization... 00:26:33.016 [2024-11-19 23:39:18.854850] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92060 ] 00:26:33.016 [2024-11-19 23:39:19.014038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.016 [2024-11-19 23:39:19.038107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.476  [2024-11-19T23:39:20.930Z] Copying: 698/1024 [MB] (698 MBps) [2024-11-19T23:39:25.137Z] Copying: 1024/1024 [MB] (average 679 MBps) 00:26:38.945 00:26:38.945 23:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:38.945 23:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3aec40cab22347243aa4ea3de33bcca0 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3aec40cab22347243aa4ea3de33bcca0 != \3\a\e\c\4\0\c\a\b\2\2\3\4\7\2\4\3\a\a\4\e\a\3\d\e\3\3\b\c\c\a\0 ]] 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91964 ]] 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91964 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91964 ']' 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 91964 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91964 00:26:40.857 killing process with pid 91964 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91964' 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 91964 00:26:40.857 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 91964 00:26:40.857 [2024-11-19 23:39:26.760855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:40.857 [2024-11-19 23:39:26.765052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.765085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:40.857 [2024-11-19 23:39:26.765095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:40.857 [2024-11-19 23:39:26.765102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.765119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:40.857 [2024-11-19 23:39:26.765484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.765495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:40.857 [2024-11-19 23:39:26.765505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.355 ms 00:26:40.857 [2024-11-19 23:39:26.765511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.765691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.765699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:40.857 [2024-11-19 23:39:26.765705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.165 ms 00:26:40.857 [2024-11-19 23:39:26.765710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.766858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.766881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:40.857 [2024-11-19 23:39:26.766888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.136 ms 00:26:40.857 [2024-11-19 23:39:26.766898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.767771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.767788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:40.857 [2024-11-19 23:39:26.767795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.847 ms 00:26:40.857 [2024-11-19 23:39:26.767802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.769107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.769135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:40.857 [2024-11-19 23:39:26.769142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.270 ms 00:26:40.857 [2024-11-19 23:39:26.769151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.770212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.770239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:40.857 [2024-11-19 23:39:26.770251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:26:40.857 [2024-11-19 23:39:26.770258] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.770316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.770323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:40.857 [2024-11-19 23:39:26.770332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:40.857 [2024-11-19 23:39:26.770341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.771626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.771652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:40.857 [2024-11-19 23:39:26.771659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms 00:26:40.857 [2024-11-19 23:39:26.771664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.857 [2024-11-19 23:39:26.774058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.857 [2024-11-19 23:39:26.774166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:40.858 [2024-11-19 23:39:26.774200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.361 ms 00:26:40.858 [2024-11-19 23:39:26.774222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.776116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.858 [2024-11-19 23:39:26.776167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:40.858 [2024-11-19 23:39:26.776182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.804 ms 00:26:40.858 [2024-11-19 23:39:26.776194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.777663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.858 [2024-11-19 23:39:26.777710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:40.858 [2024-11-19 23:39:26.777726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.347 ms 00:26:40.858 [2024-11-19 23:39:26.777755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.777803] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:40.858 [2024-11-19 23:39:26.777825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:40.858 [2024-11-19 23:39:26.777842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:40.858 [2024-11-19 23:39:26.777855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:40.858 [2024-11-19 23:39:26.777870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 
[2024-11-19 23:39:26.777935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.777987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.778000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.778013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.778025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.778038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.778050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:40.858 [2024-11-19 23:39:26.778066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:40.858 [2024-11-19 23:39:26.778078] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f5b45ef2-d912-4d0e-aec2-7f43cd25ea7a 00:26:40.858 [2024-11-19 23:39:26.778091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:40.858 [2024-11-19 23:39:26.778103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:40.858 [2024-11-19 23:39:26.778115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:40.858 [2024-11-19 23:39:26.778127] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:40.858 [2024-11-19 23:39:26.778139] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:40.858 [2024-11-19 23:39:26.778152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:40.858 [2024-11-19 23:39:26.778164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:40.858 [2024-11-19 23:39:26.778175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:40.858 [2024-11-19 23:39:26.778186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:40.858 [2024-11-19 23:39:26.778198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.858 [2024-11-19 23:39:26.778218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:40.858 [2024-11-19 23:39:26.778232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:26:40.858 [2024-11-19 23:39:26.778244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.780017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.858 [2024-11-19 23:39:26.780063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:40.858 [2024-11-19 23:39:26.780078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.745 ms 00:26:40.858 [2024-11-19 23:39:26.780091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
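
In the statistics dump above, the "WAF: inf" line is expected for this run: write amplification is conventionally the ratio of total media writes to user writes, and this shutdown pass issued metadata writes only. A hedged reading of the dumped figures, assuming that definition (the exact expression lives in ftl_debug.c):

  # total writes: 320, user writes: 0  ->  320/0 in floating point prints inf
  awk 'BEGIN { total = 320; user = 0; waf = (user > 0) ? total / user : "inf"; print "WAF:", waf }'
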
00:26:40.858 [2024-11-19 23:39:26.780198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.858 [2024-11-19 23:39:26.780212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:40.858 [2024-11-19 23:39:26.780226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:26:40.858 [2024-11-19 23:39:26.780239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.787071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.787117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:40.858 [2024-11-19 23:39:26.787133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.787146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.787194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.787208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:40.858 [2024-11-19 23:39:26.787220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.787232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.787338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.787355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:40.858 [2024-11-19 23:39:26.787369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.787381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.787418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.787435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:40.858 [2024-11-19 23:39:26.787453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.787465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.795567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.795600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:40.858 [2024-11-19 23:39:26.795608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.795615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.801698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.801738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:40.858 [2024-11-19 23:39:26.801747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.801753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.801788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.801795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:40.858 [2024-11-19 23:39:26.801801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.801806] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.801851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.801858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:40.858 [2024-11-19 23:39:26.801866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.801872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.801925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.801933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:40.858 [2024-11-19 23:39:26.801939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.801944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.801967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.801975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:40.858 [2024-11-19 23:39:26.801980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.801991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.802021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.802028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:40.858 [2024-11-19 23:39:26.802034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.802039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.802073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:40.858 [2024-11-19 23:39:26.802080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:40.858 [2024-11-19 23:39:26.802088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:40.858 [2024-11-19 23:39:26.802094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.858 [2024-11-19 23:39:26.802186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 37.111 ms, result 0 00:26:40.858 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:40.858 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:40.858 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:40.858 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:40.858 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:40.858 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:40.858 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:40.859 Remove shared memory files 00:26:40.859 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:40.859 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:40.859 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:40.859 23:39:26 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid91776 00:26:40.859 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:40.859 23:39:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:40.859 00:26:40.859 real 1m15.811s 00:26:40.859 user 1m40.702s 00:26:40.859 sys 0m20.586s 00:26:40.859 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:40.859 ************************************ 00:26:40.859 END TEST ftl_upgrade_shutdown 00:26:40.859 ************************************ 00:26:40.859 23:39:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:40.859 23:39:27 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:40.859 23:39:27 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:40.859 23:39:27 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:26:40.859 23:39:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.859 23:39:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:40.859 ************************************ 00:26:40.859 START TEST ftl_restore_fast 00:26:40.859 ************************************ 00:26:40.859 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:41.121 * Looking for test storage... 00:26:41.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.121 --rc genhtml_branch_coverage=1 00:26:41.121 --rc genhtml_function_coverage=1 00:26:41.121 --rc genhtml_legend=1 00:26:41.121 --rc geninfo_all_blocks=1 00:26:41.121 --rc geninfo_unexecuted_blocks=1 00:26:41.121 00:26:41.121 ' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.121 --rc genhtml_branch_coverage=1 00:26:41.121 --rc genhtml_function_coverage=1 00:26:41.121 --rc genhtml_legend=1 00:26:41.121 --rc geninfo_all_blocks=1 00:26:41.121 --rc geninfo_unexecuted_blocks=1 00:26:41.121 00:26:41.121 ' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.121 --rc genhtml_branch_coverage=1 00:26:41.121 --rc genhtml_function_coverage=1 00:26:41.121 --rc genhtml_legend=1 00:26:41.121 --rc geninfo_all_blocks=1 00:26:41.121 --rc geninfo_unexecuted_blocks=1 00:26:41.121 00:26:41.121 ' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.121 --rc genhtml_branch_coverage=1 00:26:41.121 --rc genhtml_function_coverage=1 00:26:41.121 --rc genhtml_legend=1 00:26:41.121 --rc geninfo_all_blocks=1 00:26:41.121 --rc geninfo_unexecuted_blocks=1 00:26:41.121 00:26:41.121 ' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
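The xtrace above steps through scripts/common.sh deciding whether the installed lcov predates 2.0: "lt 1.15 2" delegates to "cmp_versions", which splits both version strings on ".", "-" and ":" (IFS=.-:) into the ver1/ver2 arrays and compares them component by component. What follows is a minimal standalone reconstruction of that idiom, for illustration only; the real helper also validates each component via "decimal" and supports more operators than "<".

# Hedged sketch of the version-comparison idiom traced above; assumes plain
# bash, and missing components default to 0 (so 1.15 vs 2 compares 1 vs 2).
lt() {
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov older than 2.0"   # the branch this run takes

In this run the comparison succeeds, so the harness exports the pre-2.0 "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" option spelling seen in the LCOV_OPTS assignments above.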
00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Z3HPUPr3z6 00:26:41.121 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:41.122 23:39:27 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92224 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92224 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 92224 ']' 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:41.122 23:39:27 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:41.122 [2024-11-19 23:39:27.275886] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:26:41.122 [2024-11-19 23:39:27.276021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92224 ] 00:26:41.383 [2024-11-19 23:39:27.425079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.383 [2024-11-19 23:39:27.442236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.955 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.955 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:26:41.955 23:39:28 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:41.955 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:41.955 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:41.955 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:41.955 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:41.956 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:42.216 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:42.216 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:42.216 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:42.216 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:42.217 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:42.217 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:26:42.217 23:39:28 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:26:42.217 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:42.477 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:42.477 { 00:26:42.477 "name": "nvme0n1", 00:26:42.477 "aliases": [ 00:26:42.477 "4e0badcf-a3d5-4b98-9463-c234c2e32393" 00:26:42.477 ], 00:26:42.477 "product_name": "NVMe disk", 00:26:42.478 "block_size": 4096, 00:26:42.478 "num_blocks": 1310720, 00:26:42.478 "uuid": "4e0badcf-a3d5-4b98-9463-c234c2e32393", 00:26:42.478 "numa_id": -1, 00:26:42.478 "assigned_rate_limits": { 00:26:42.478 "rw_ios_per_sec": 0, 00:26:42.478 "rw_mbytes_per_sec": 0, 00:26:42.478 "r_mbytes_per_sec": 0, 00:26:42.478 "w_mbytes_per_sec": 0 00:26:42.478 }, 00:26:42.478 "claimed": true, 00:26:42.478 "claim_type": "read_many_write_one", 00:26:42.478 "zoned": false, 00:26:42.478 "supported_io_types": { 00:26:42.478 "read": true, 00:26:42.478 "write": true, 00:26:42.478 "unmap": true, 00:26:42.478 "flush": true, 00:26:42.478 "reset": true, 00:26:42.478 "nvme_admin": true, 00:26:42.478 "nvme_io": true, 00:26:42.478 "nvme_io_md": false, 00:26:42.478 "write_zeroes": true, 00:26:42.478 "zcopy": false, 00:26:42.478 "get_zone_info": false, 00:26:42.478 "zone_management": false, 00:26:42.478 "zone_append": false, 00:26:42.478 "compare": true, 00:26:42.478 "compare_and_write": false, 00:26:42.478 "abort": true, 00:26:42.478 "seek_hole": false, 00:26:42.478 "seek_data": false, 00:26:42.478 "copy": true, 00:26:42.478 "nvme_iov_md": false 00:26:42.478 }, 00:26:42.478 "driver_specific": { 00:26:42.478 "nvme": [ 00:26:42.478 { 00:26:42.478 "pci_address": "0000:00:11.0", 00:26:42.478 "trid": { 00:26:42.478 "trtype": "PCIe", 00:26:42.478 "traddr": "0000:00:11.0" 00:26:42.478 }, 00:26:42.478 "ctrlr_data": { 00:26:42.478 "cntlid": 0, 00:26:42.478 "vendor_id": "0x1b36", 00:26:42.478 "model_number": "QEMU NVMe Ctrl", 00:26:42.478 "serial_number": "12341", 00:26:42.478 "firmware_revision": "8.0.0", 00:26:42.478 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:42.478 "oacs": { 00:26:42.478 "security": 0, 00:26:42.478 "format": 1, 00:26:42.478 "firmware": 0, 00:26:42.478 "ns_manage": 1 00:26:42.478 }, 00:26:42.478 "multi_ctrlr": false, 00:26:42.478 "ana_reporting": false 00:26:42.478 }, 00:26:42.478 "vs": { 00:26:42.478 "nvme_version": "1.4" 00:26:42.478 }, 00:26:42.478 "ns_data": { 00:26:42.478 "id": 1, 00:26:42.478 "can_share": false 00:26:42.478 } 00:26:42.478 } 00:26:42.478 ], 00:26:42.478 "mp_policy": "active_passive" 00:26:42.478 } 00:26:42.478 } 00:26:42.478 ]' 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:26:42.478 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:42.739 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=79adb5c2-d44b-4ed9-b4f7-9db07ee0d408 00:26:42.739 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:42.739 23:39:28 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79adb5c2-d44b-4ed9-b4f7-9db07ee0d408 00:26:43.000 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:43.261 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=c3d67121-0071-4e8b-a402-64e620f1d9bc 00:26:43.261 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c3d67121-0071-4e8b-a402-64e620f1d9bc 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:26:43.522 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:43.790 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:43.790 { 00:26:43.790 "name": "9760d3f8-c3a2-43ac-9e54-82a82616881d", 00:26:43.790 "aliases": [ 00:26:43.790 "lvs/nvme0n1p0" 00:26:43.790 ], 00:26:43.790 "product_name": "Logical Volume", 00:26:43.790 "block_size": 4096, 00:26:43.790 "num_blocks": 26476544, 00:26:43.790 "uuid": "9760d3f8-c3a2-43ac-9e54-82a82616881d", 00:26:43.790 "assigned_rate_limits": { 00:26:43.790 "rw_ios_per_sec": 0, 00:26:43.790 "rw_mbytes_per_sec": 0, 00:26:43.790 "r_mbytes_per_sec": 0, 00:26:43.790 "w_mbytes_per_sec": 0 00:26:43.790 }, 00:26:43.790 "claimed": false, 00:26:43.790 "zoned": false, 00:26:43.790 "supported_io_types": { 00:26:43.790 "read": true, 00:26:43.790 "write": true, 00:26:43.790 "unmap": true, 00:26:43.790 "flush": false, 00:26:43.790 "reset": true, 00:26:43.790 "nvme_admin": false, 00:26:43.790 "nvme_io": false, 00:26:43.790 "nvme_io_md": false, 00:26:43.790 "write_zeroes": true, 00:26:43.790 "zcopy": false, 00:26:43.790 "get_zone_info": false, 00:26:43.790 "zone_management": false, 00:26:43.790 "zone_append": 
false, 00:26:43.790 "compare": false, 00:26:43.790 "compare_and_write": false, 00:26:43.790 "abort": false, 00:26:43.790 "seek_hole": true, 00:26:43.790 "seek_data": true, 00:26:43.790 "copy": false, 00:26:43.790 "nvme_iov_md": false 00:26:43.790 }, 00:26:43.790 "driver_specific": { 00:26:43.790 "lvol": { 00:26:43.790 "lvol_store_uuid": "c3d67121-0071-4e8b-a402-64e620f1d9bc", 00:26:43.791 "base_bdev": "nvme0n1", 00:26:43.791 "thin_provision": true, 00:26:43.791 "num_allocated_clusters": 0, 00:26:43.791 "snapshot": false, 00:26:43.791 "clone": false, 00:26:43.791 "esnap_clone": false 00:26:43.791 } 00:26:43.791 } 00:26:43.791 } 00:26:43.791 ]' 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:43.791 23:39:29 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:26:44.050 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:44.312 { 00:26:44.312 "name": "9760d3f8-c3a2-43ac-9e54-82a82616881d", 00:26:44.312 "aliases": [ 00:26:44.312 "lvs/nvme0n1p0" 00:26:44.312 ], 00:26:44.312 "product_name": "Logical Volume", 00:26:44.312 "block_size": 4096, 00:26:44.312 "num_blocks": 26476544, 00:26:44.312 "uuid": "9760d3f8-c3a2-43ac-9e54-82a82616881d", 00:26:44.312 "assigned_rate_limits": { 00:26:44.312 "rw_ios_per_sec": 0, 00:26:44.312 "rw_mbytes_per_sec": 0, 00:26:44.312 "r_mbytes_per_sec": 0, 00:26:44.312 "w_mbytes_per_sec": 0 00:26:44.312 }, 00:26:44.312 "claimed": false, 00:26:44.312 "zoned": false, 00:26:44.312 "supported_io_types": { 00:26:44.312 "read": true, 00:26:44.312 "write": true, 00:26:44.312 "unmap": true, 00:26:44.312 "flush": false, 00:26:44.312 "reset": true, 00:26:44.312 "nvme_admin": false, 00:26:44.312 "nvme_io": false, 00:26:44.312 "nvme_io_md": false, 00:26:44.312 "write_zeroes": true, 00:26:44.312 "zcopy": false, 00:26:44.312 "get_zone_info": false, 00:26:44.312 "zone_management": false, 
00:26:44.312 "zone_append": false, 00:26:44.312 "compare": false, 00:26:44.312 "compare_and_write": false, 00:26:44.312 "abort": false, 00:26:44.312 "seek_hole": true, 00:26:44.312 "seek_data": true, 00:26:44.312 "copy": false, 00:26:44.312 "nvme_iov_md": false 00:26:44.312 }, 00:26:44.312 "driver_specific": { 00:26:44.312 "lvol": { 00:26:44.312 "lvol_store_uuid": "c3d67121-0071-4e8b-a402-64e620f1d9bc", 00:26:44.312 "base_bdev": "nvme0n1", 00:26:44.312 "thin_provision": true, 00:26:44.312 "num_allocated_clusters": 0, 00:26:44.312 "snapshot": false, 00:26:44.312 "clone": false, 00:26:44.312 "esnap_clone": false 00:26:44.312 } 00:26:44.312 } 00:26:44.312 } 00:26:44.312 ]' 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:44.312 23:39:30 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9760d3f8-c3a2-43ac-9e54-82a82616881d 00:26:44.573 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:44.573 { 00:26:44.573 "name": "9760d3f8-c3a2-43ac-9e54-82a82616881d", 00:26:44.573 "aliases": [ 00:26:44.573 "lvs/nvme0n1p0" 00:26:44.573 ], 00:26:44.573 "product_name": "Logical Volume", 00:26:44.573 "block_size": 4096, 00:26:44.573 "num_blocks": 26476544, 00:26:44.573 "uuid": "9760d3f8-c3a2-43ac-9e54-82a82616881d", 00:26:44.573 "assigned_rate_limits": { 00:26:44.573 "rw_ios_per_sec": 0, 00:26:44.573 "rw_mbytes_per_sec": 0, 00:26:44.573 "r_mbytes_per_sec": 0, 00:26:44.573 "w_mbytes_per_sec": 0 00:26:44.573 }, 00:26:44.573 "claimed": false, 00:26:44.573 "zoned": false, 00:26:44.574 "supported_io_types": { 00:26:44.574 "read": true, 00:26:44.574 "write": true, 00:26:44.574 "unmap": true, 00:26:44.574 "flush": false, 00:26:44.574 "reset": true, 00:26:44.574 "nvme_admin": false, 00:26:44.574 "nvme_io": false, 00:26:44.574 "nvme_io_md": false, 00:26:44.574 "write_zeroes": true, 00:26:44.574 "zcopy": false, 00:26:44.574 "get_zone_info": false, 00:26:44.574 "zone_management": false, 00:26:44.574 "zone_append": false, 00:26:44.574 "compare": false, 00:26:44.574 "compare_and_write": false, 00:26:44.574 "abort": false, 00:26:44.574 "seek_hole": 
true, 00:26:44.574 "seek_data": true, 00:26:44.574 "copy": false, 00:26:44.574 "nvme_iov_md": false 00:26:44.574 }, 00:26:44.574 "driver_specific": { 00:26:44.574 "lvol": { 00:26:44.574 "lvol_store_uuid": "c3d67121-0071-4e8b-a402-64e620f1d9bc", 00:26:44.574 "base_bdev": "nvme0n1", 00:26:44.574 "thin_provision": true, 00:26:44.574 "num_allocated_clusters": 0, 00:26:44.574 "snapshot": false, 00:26:44.574 "clone": false, 00:26:44.574 "esnap_clone": false 00:26:44.574 } 00:26:44.574 } 00:26:44.574 } 00:26:44.574 ]' 00:26:44.574 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9760d3f8-c3a2-43ac-9e54-82a82616881d --l2p_dram_limit 10' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:44.836 23:39:30 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9760d3f8-c3a2-43ac-9e54-82a82616881d --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:44.836 [2024-11-19 23:39:31.003242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.836 [2024-11-19 23:39:31.003282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:44.836 [2024-11-19 23:39:31.003294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:44.836 [2024-11-19 23:39:31.003304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.836 [2024-11-19 23:39:31.003346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.836 [2024-11-19 23:39:31.003355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.836 [2024-11-19 23:39:31.003366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:44.836 [2024-11-19 23:39:31.003375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.836 [2024-11-19 23:39:31.003392] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:44.837 [2024-11-19 23:39:31.003600] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:44.837 [2024-11-19 23:39:31.003612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.003622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.837 [2024-11-19 23:39:31.003628] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:26:44.837 [2024-11-19 23:39:31.003635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.003658] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d249e450-9437-4263-84c5-287c10346aed 00:26:44.837 [2024-11-19 23:39:31.004620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.004638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:44.837 [2024-11-19 23:39:31.004650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:44.837 [2024-11-19 23:39:31.004656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.009433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.009456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.837 [2024-11-19 23:39:31.009468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:26:44.837 [2024-11-19 23:39:31.009475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.009536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.009545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.837 [2024-11-19 23:39:31.009553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:44.837 [2024-11-19 23:39:31.009559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.009594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.009602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:44.837 [2024-11-19 23:39:31.009609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:44.837 [2024-11-19 23:39:31.009617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.009635] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:44.837 [2024-11-19 23:39:31.010921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.010943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.837 [2024-11-19 23:39:31.010951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:26:44.837 [2024-11-19 23:39:31.010958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.010983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.010994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:44.837 [2024-11-19 23:39:31.011002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:44.837 [2024-11-19 23:39:31.011011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.011023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:44.837 [2024-11-19 23:39:31.011133] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:44.837 [2024-11-19 23:39:31.011142] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:44.837 [2024-11-19 23:39:31.011157] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:44.837 [2024-11-19 23:39:31.011165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011175] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011181] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:44.837 [2024-11-19 23:39:31.011190] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:44.837 [2024-11-19 23:39:31.011195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:44.837 [2024-11-19 23:39:31.011202] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:44.837 [2024-11-19 23:39:31.011209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.011216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:44.837 [2024-11-19 23:39:31.011222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:26:44.837 [2024-11-19 23:39:31.011229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.011292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.837 [2024-11-19 23:39:31.011305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:44.837 [2024-11-19 23:39:31.011311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:44.837 [2024-11-19 23:39:31.011318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.837 [2024-11-19 23:39:31.011390] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:44.837 [2024-11-19 23:39:31.011399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:44.837 [2024-11-19 23:39:31.011405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:44.837 [2024-11-19 23:39:31.011425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:44.837 [2024-11-19 23:39:31.011443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.837 [2024-11-19 23:39:31.011455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:44.837 [2024-11-19 23:39:31.011461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:44.837 [2024-11-19 23:39:31.011466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:44.837 [2024-11-19 23:39:31.011477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:44.837 [2024-11-19 23:39:31.011482] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:26:44.837 [2024-11-19 23:39:31.011488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:44.837 [2024-11-19 23:39:31.011499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:44.837 [2024-11-19 23:39:31.011516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:44.837 [2024-11-19 23:39:31.011533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:44.837 [2024-11-19 23:39:31.011549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:44.837 [2024-11-19 23:39:31.011567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:44.837 [2024-11-19 23:39:31.011580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:44.837 [2024-11-19 23:39:31.011586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.837 [2024-11-19 23:39:31.011600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:44.837 [2024-11-19 23:39:31.011607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:44.837 [2024-11-19 23:39:31.011613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:44.837 [2024-11-19 23:39:31.011620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:44.837 [2024-11-19 23:39:31.011626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:44.837 [2024-11-19 23:39:31.011633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:44.837 [2024-11-19 23:39:31.011646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:44.837 [2024-11-19 23:39:31.011652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011659] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:44.837 [2024-11-19 23:39:31.011668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:44.837 [2024-11-19 23:39:31.011681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:44.837 [2024-11-19 
23:39:31.011687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:44.837 [2024-11-19 23:39:31.011695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:44.837 [2024-11-19 23:39:31.011701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:44.837 [2024-11-19 23:39:31.011709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:44.838 [2024-11-19 23:39:31.011715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:44.838 [2024-11-19 23:39:31.011722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:44.838 [2024-11-19 23:39:31.011727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:44.838 [2024-11-19 23:39:31.011750] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:44.838 [2024-11-19 23:39:31.011761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.838 [2024-11-19 23:39:31.011770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:44.838 [2024-11-19 23:39:31.011776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:44.838 [2024-11-19 23:39:31.011784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:44.838 [2024-11-19 23:39:31.011790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:44.838 [2024-11-19 23:39:31.011798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:44.838 [2024-11-19 23:39:31.011804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:44.838 [2024-11-19 23:39:31.011821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:44.838 [2024-11-19 23:39:31.011828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:44.838 [2024-11-19 23:39:31.011835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:44.838 [2024-11-19 23:39:31.011842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:44.838 [2024-11-19 23:39:31.011850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:44.838 [2024-11-19 23:39:31.011856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:44.838 [2024-11-19 23:39:31.011864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:44.838 [2024-11-19 23:39:31.011870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:44.838 [2024-11-19 
23:39:31.011878] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:44.838 [2024-11-19 23:39:31.011886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:44.838 [2024-11-19 23:39:31.011894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:44.838 [2024-11-19 23:39:31.011900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:44.838 [2024-11-19 23:39:31.011907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:44.838 [2024-11-19 23:39:31.011914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:44.838 [2024-11-19 23:39:31.011922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.838 [2024-11-19 23:39:31.011930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:44.838 [2024-11-19 23:39:31.011941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:26:44.838 [2024-11-19 23:39:31.011948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.838 [2024-11-19 23:39:31.011981] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:44.838 [2024-11-19 23:39:31.011988] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:49.041 [2024-11-19 23:39:34.609623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.041 [2024-11-19 23:39:34.609671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:49.041 [2024-11-19 23:39:34.609684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3597.627 ms 00:26:49.041 [2024-11-19 23:39:34.609692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.041 [2024-11-19 23:39:34.617362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.041 [2024-11-19 23:39:34.617395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:49.041 [2024-11-19 23:39:34.617405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.601 ms 00:26:49.041 [2024-11-19 23:39:34.617411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.041 [2024-11-19 23:39:34.617496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.041 [2024-11-19 23:39:34.617503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:49.041 [2024-11-19 23:39:34.617511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:49.041 [2024-11-19 23:39:34.617517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.041 [2024-11-19 23:39:34.624941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.041 [2024-11-19 23:39:34.624971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:49.041 [2024-11-19 23:39:34.624981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.395 ms 00:26:49.041 [2024-11-19 23:39:34.624988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:49.041 [2024-11-19 23:39:34.625015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.041 [2024-11-19 23:39:34.625024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:49.042 [2024-11-19 23:39:34.625032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:49.042 [2024-11-19 23:39:34.625038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.625353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.625372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:49.042 [2024-11-19 23:39:34.625380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:26:49.042 [2024-11-19 23:39:34.625386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.625470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.625479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:49.042 [2024-11-19 23:39:34.625488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:49.042 [2024-11-19 23:39:34.625494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.630294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.630315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:49.042 [2024-11-19 23:39:34.630323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.784 ms 00:26:49.042 [2024-11-19 23:39:34.630329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.636914] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:49.042 [2024-11-19 23:39:34.639150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.639173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:49.042 [2024-11-19 23:39:34.639181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.767 ms 00:26:49.042 [2024-11-19 23:39:34.639189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.708240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.708290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:49.042 [2024-11-19 23:39:34.708305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.030 ms 00:26:49.042 [2024-11-19 23:39:34.708318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.708494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.708507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:49.042 [2024-11-19 23:39:34.708516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:26:49.042 [2024-11-19 23:39:34.708525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.712327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.712368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:26:49.042 [2024-11-19 23:39:34.712380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.784 ms 00:26:49.042 [2024-11-19 23:39:34.712392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.715935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.715966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:49.042 [2024-11-19 23:39:34.715975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.506 ms 00:26:49.042 [2024-11-19 23:39:34.715984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.716279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.716290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:49.042 [2024-11-19 23:39:34.716298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:26:49.042 [2024-11-19 23:39:34.716309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.745612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.745648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:49.042 [2024-11-19 23:39:34.745658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.285 ms 00:26:49.042 [2024-11-19 23:39:34.745671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.750350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.750382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:49.042 [2024-11-19 23:39:34.750391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.636 ms 00:26:49.042 [2024-11-19 23:39:34.750400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.754241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.754273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:49.042 [2024-11-19 23:39:34.754281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:26:49.042 [2024-11-19 23:39:34.754290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.758755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.758785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:49.042 [2024-11-19 23:39:34.758794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.432 ms 00:26:49.042 [2024-11-19 23:39:34.758805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.758841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.758852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:49.042 [2024-11-19 23:39:34.758861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:49.042 [2024-11-19 23:39:34.758876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.758937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:34.758947] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:49.042 [2024-11-19 23:39:34.758955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:49.042 [2024-11-19 23:39:34.758964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:34.759877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3756.199 ms, result 0 00:26:49.042 { 00:26:49.042 "name": "ftl0", 00:26:49.042 "uuid": "d249e450-9437-4263-84c5-287c10346aed" 00:26:49.042 } 00:26:49.042 23:39:34 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:49.042 23:39:34 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:49.042 23:39:34 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:49.042 23:39:34 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:49.042 [2024-11-19 23:39:35.192689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.192769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:49.042 [2024-11-19 23:39:35.192786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:49.042 [2024-11-19 23:39:35.192798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.192826] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:49.042 [2024-11-19 23:39:35.193624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.193670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:49.042 [2024-11-19 23:39:35.193682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:26:49.042 [2024-11-19 23:39:35.193697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.193989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.194013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:49.042 [2024-11-19 23:39:35.194024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:26:49.042 [2024-11-19 23:39:35.194039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.197515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.197540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:49.042 [2024-11-19 23:39:35.197550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:26:49.042 [2024-11-19 23:39:35.197567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.203876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.203927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:49.042 [2024-11-19 23:39:35.203940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.287 ms 00:26:49.042 [2024-11-19 23:39:35.203951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.207055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 
[2024-11-19 23:39:35.207116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:49.042 [2024-11-19 23:39:35.207127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:26:49.042 [2024-11-19 23:39:35.207137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.212471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.212535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:49.042 [2024-11-19 23:39:35.212548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.283 ms 00:26:49.042 [2024-11-19 23:39:35.212558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.212720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.212757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:49.042 [2024-11-19 23:39:35.212770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:26:49.042 [2024-11-19 23:39:35.212782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.215372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.215426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:49.042 [2024-11-19 23:39:35.215436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:26:49.042 [2024-11-19 23:39:35.215446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.217753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.217804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:49.042 [2024-11-19 23:39:35.217814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:26:49.042 [2024-11-19 23:39:35.217839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.220153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.220211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:49.042 [2024-11-19 23:39:35.220222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:26:49.042 [2024-11-19 23:39:35.220232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.222062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.042 [2024-11-19 23:39:35.222120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:49.042 [2024-11-19 23:39:35.222131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:26:49.042 [2024-11-19 23:39:35.222141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.042 [2024-11-19 23:39:35.222188] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:49.042 [2024-11-19 23:39:35.222207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:49.042 [2024-11-19 23:39:35.222218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:49.042 [2024-11-19 23:39:35.222228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:49.042 [... Band 4 through Band 100: identical entries, 0 / 261120 wr_cnt: 0 state: free ...] 00:26:49.042 [2024-11-19 23:39:35.223155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:49.042 [2024-11-19 23:39:35.223163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d249e450-9437-4263-84c5-287c10346aed 00:26:49.042 
[2024-11-19 23:39:35.223173] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:49.042 [2024-11-19 23:39:35.223181] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:49.042 [2024-11-19 23:39:35.223190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:49.042 [2024-11-19 23:39:35.223198] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:49.042 [2024-11-19 23:39:35.223207] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:49.042 [2024-11-19 23:39:35.223219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:49.042 [2024-11-19 23:39:35.223228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:49.042 [2024-11-19 23:39:35.223235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:49.042 [2024-11-19 23:39:35.223243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:49.042 [2024-11-19 23:39:35.223251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.043 [2024-11-19 23:39:35.223261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:49.043 [2024-11-19 23:39:35.223269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:26:49.043 [2024-11-19 23:39:35.223279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.043 [2024-11-19 23:39:35.225724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.043 [2024-11-19 23:39:35.225787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:49.043 [2024-11-19 23:39:35.225797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.423 ms 00:26:49.043 [2024-11-19 23:39:35.225813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.043 [2024-11-19 23:39:35.225938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.043 [2024-11-19 23:39:35.225955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:49.043 [2024-11-19 23:39:35.225964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:49.043 [2024-11-19 23:39:35.225974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.234584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.234634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:49.303 [2024-11-19 23:39:35.234645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.234658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.234726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.234753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:49.303 [2024-11-19 23:39:35.234762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.234772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.234836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.234851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:49.303 [2024-11-19 23:39:35.234859] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.234869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.234890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.234901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:49.303 [2024-11-19 23:39:35.234909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.234919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.249176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.249231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:49.303 [2024-11-19 23:39:35.249243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.249258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.259898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.259949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:49.303 [2024-11-19 23:39:35.259960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.259970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.260044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.260059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:49.303 [2024-11-19 23:39:35.260068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.260078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.260124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.260138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:49.303 [2024-11-19 23:39:35.260151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.260160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.260230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.303 [2024-11-19 23:39:35.260242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:49.303 [2024-11-19 23:39:35.260250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.303 [2024-11-19 23:39:35.260259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.303 [2024-11-19 23:39:35.260295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.304 [2024-11-19 23:39:35.260309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:49.304 [2024-11-19 23:39:35.260317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.304 [2024-11-19 23:39:35.260330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.304 [2024-11-19 23:39:35.260373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.304 [2024-11-19 23:39:35.260394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:26:49.304 [2024-11-19 23:39:35.260403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.304 [2024-11-19 23:39:35.260413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.304 [2024-11-19 23:39:35.260460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.304 [2024-11-19 23:39:35.260475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:49.304 [2024-11-19 23:39:35.260483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.304 [2024-11-19 23:39:35.260493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.304 [2024-11-19 23:39:35.260629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.912 ms, result 0 00:26:49.304 true 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92224 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92224 ']' 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92224 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92224 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.304 killing process with pid 92224 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92224' 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 92224 00:26:49.304 23:39:35 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 92224 00:26:54.597 23:39:39 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:58.807 262144+0 records in 00:26:58.807 262144+0 records out 00:26:58.807 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.50444 s, 238 MB/s 00:26:58.807 23:39:44 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:00.724 23:39:46 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:00.724 [2024-11-19 23:39:46.666290] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:27:00.724 [2024-11-19 23:39:46.666389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92443 ] 00:27:00.724 [2024-11-19 23:39:46.819073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.724 [2024-11-19 23:39:46.845329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.987 [2024-11-19 23:39:46.961706] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:00.987 [2024-11-19 23:39:46.961805] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:00.987 [2024-11-19 23:39:47.126267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.987 [2024-11-19 23:39:47.126335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:00.987 [2024-11-19 23:39:47.126353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:00.987 [2024-11-19 23:39:47.126362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.126430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.987 [2024-11-19 23:39:47.126442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:00.987 [2024-11-19 23:39:47.126455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:00.987 [2024-11-19 23:39:47.126464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.126490] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:00.987 [2024-11-19 23:39:47.126803] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:00.987 [2024-11-19 23:39:47.126828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.987 [2024-11-19 23:39:47.126838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:00.987 [2024-11-19 23:39:47.126851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:27:00.987 [2024-11-19 23:39:47.126862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.129077] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:00.987 [2024-11-19 23:39:47.134162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.987 [2024-11-19 23:39:47.134214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:00.987 [2024-11-19 23:39:47.134226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.087 ms 00:27:00.987 [2024-11-19 23:39:47.134243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.134327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.987 [2024-11-19 23:39:47.134341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:00.987 [2024-11-19 23:39:47.134350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:00.987 [2024-11-19 23:39:47.134358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.145775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:00.987 [2024-11-19 23:39:47.145815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:00.987 [2024-11-19 23:39:47.145832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.371 ms 00:27:00.987 [2024-11-19 23:39:47.145840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.145959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.987 [2024-11-19 23:39:47.145970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:00.987 [2024-11-19 23:39:47.145983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:27:00.987 [2024-11-19 23:39:47.145996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.146056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.987 [2024-11-19 23:39:47.146072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:00.987 [2024-11-19 23:39:47.146082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:00.987 [2024-11-19 23:39:47.146090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.987 [2024-11-19 23:39:47.146121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:00.988 [2024-11-19 23:39:47.148836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.988 [2024-11-19 23:39:47.148876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:00.988 [2024-11-19 23:39:47.148886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.721 ms 00:27:00.988 [2024-11-19 23:39:47.148895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.988 [2024-11-19 23:39:47.148961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.988 [2024-11-19 23:39:47.148972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:00.988 [2024-11-19 23:39:47.148983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:00.988 [2024-11-19 23:39:47.148994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.988 [2024-11-19 23:39:47.149027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:00.988 [2024-11-19 23:39:47.149056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:00.988 [2024-11-19 23:39:47.149097] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:00.988 [2024-11-19 23:39:47.149120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:00.988 [2024-11-19 23:39:47.149233] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:00.988 [2024-11-19 23:39:47.149246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:00.988 [2024-11-19 23:39:47.149257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:00.988 [2024-11-19 23:39:47.149273] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149283] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149292] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:00.988 [2024-11-19 23:39:47.149301] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:00.988 [2024-11-19 23:39:47.149309] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:00.988 [2024-11-19 23:39:47.149319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:00.988 [2024-11-19 23:39:47.149329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.988 [2024-11-19 23:39:47.149336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:00.988 [2024-11-19 23:39:47.149345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:27:00.988 [2024-11-19 23:39:47.149361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.988 [2024-11-19 23:39:47.149445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.988 [2024-11-19 23:39:47.149458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:00.988 [2024-11-19 23:39:47.149467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:00.988 [2024-11-19 23:39:47.149476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.988 [2024-11-19 23:39:47.149582] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:00.988 [2024-11-19 23:39:47.149600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:00.988 [2024-11-19 23:39:47.149610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:00.988 [2024-11-19 23:39:47.149650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:00.988 [2024-11-19 23:39:47.149677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:00.988 [2024-11-19 23:39:47.149698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:00.988 [2024-11-19 23:39:47.149707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:00.988 [2024-11-19 23:39:47.149715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:00.988 [2024-11-19 23:39:47.149724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:00.988 [2024-11-19 23:39:47.149756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:00.988 [2024-11-19 23:39:47.149766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:00.988 [2024-11-19 23:39:47.149789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149798] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:00.988 [2024-11-19 23:39:47.149816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:00.988 [2024-11-19 23:39:47.149841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:00.988 [2024-11-19 23:39:47.149878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:00.988 [2024-11-19 23:39:47.149900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:00.988 [2024-11-19 23:39:47.149917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:00.988 [2024-11-19 23:39:47.149924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:00.988 [2024-11-19 23:39:47.149938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:00.988 [2024-11-19 23:39:47.149944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:00.988 [2024-11-19 23:39:47.149951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:00.988 [2024-11-19 23:39:47.149958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:00.988 [2024-11-19 23:39:47.149966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:00.988 [2024-11-19 23:39:47.149974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.988 [2024-11-19 23:39:47.149981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:00.988 [2024-11-19 23:39:47.149988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:00.988 [2024-11-19 23:39:47.150000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.988 [2024-11-19 23:39:47.150007] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:00.988 [2024-11-19 23:39:47.150020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:00.988 [2024-11-19 23:39:47.150031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:00.988 [2024-11-19 23:39:47.150040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:00.988 [2024-11-19 23:39:47.150048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:00.988 [2024-11-19 23:39:47.150055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:00.988 [2024-11-19 23:39:47.150064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:00.988 
[2024-11-19 23:39:47.150073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:00.988 [2024-11-19 23:39:47.150090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:00.988 [2024-11-19 23:39:47.150099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:00.988 [2024-11-19 23:39:47.150108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:00.988 [2024-11-19 23:39:47.150119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.988 [2024-11-19 23:39:47.150128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:00.988 [2024-11-19 23:39:47.150136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:00.988 [2024-11-19 23:39:47.150144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:00.988 [2024-11-19 23:39:47.150155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:00.988 [2024-11-19 23:39:47.150163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:00.988 [2024-11-19 23:39:47.150171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:00.988 [2024-11-19 23:39:47.150178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:00.988 [2024-11-19 23:39:47.150185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:00.988 [2024-11-19 23:39:47.150196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:00.988 [2024-11-19 23:39:47.150203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:00.988 [2024-11-19 23:39:47.150211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:00.988 [2024-11-19 23:39:47.150220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:00.988 [2024-11-19 23:39:47.150230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:00.989 [2024-11-19 23:39:47.150237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:00.989 [2024-11-19 23:39:47.150245] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:00.989 [2024-11-19 23:39:47.150255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.989 [2024-11-19 23:39:47.150263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:00.989 [2024-11-19 23:39:47.150271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:00.989 [2024-11-19 23:39:47.150278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:00.989 [2024-11-19 23:39:47.150289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:00.989 [2024-11-19 23:39:47.150297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.989 [2024-11-19 23:39:47.150307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:00.989 [2024-11-19 23:39:47.150315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:27:00.989 [2024-11-19 23:39:47.150322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.989 [2024-11-19 23:39:47.166890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.989 [2024-11-19 23:39:47.166922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:00.989 [2024-11-19 23:39:47.166934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.495 ms 00:27:00.989 [2024-11-19 23:39:47.166942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.989 [2024-11-19 23:39:47.167034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.989 [2024-11-19 23:39:47.167042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:00.989 [2024-11-19 23:39:47.167051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:00.989 [2024-11-19 23:39:47.167058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.190475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.190538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:01.252 [2024-11-19 23:39:47.190561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.366 ms 00:27:01.252 [2024-11-19 23:39:47.190575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.190652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.190670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.252 [2024-11-19 23:39:47.190687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:01.252 [2024-11-19 23:39:47.190707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.191439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.191648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.252 [2024-11-19 23:39:47.191675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:27:01.252 [2024-11-19 23:39:47.191691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.191979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.192005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.252 [2024-11-19 23:39:47.192020] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:27:01.252 [2024-11-19 23:39:47.192036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.199875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.199909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.252 [2024-11-19 23:39:47.199925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.808 ms 00:27:01.252 [2024-11-19 23:39:47.199933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.203155] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:01.252 [2024-11-19 23:39:47.203191] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:01.252 [2024-11-19 23:39:47.203202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.203211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:01.252 [2024-11-19 23:39:47.203219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.188 ms 00:27:01.252 [2024-11-19 23:39:47.203226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.217962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.218091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:01.252 [2024-11-19 23:39:47.218113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.692 ms 00:27:01.252 [2024-11-19 23:39:47.218121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.220526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.220559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:01.252 [2024-11-19 23:39:47.220569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.362 ms 00:27:01.252 [2024-11-19 23:39:47.220576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.222577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.222608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:01.252 [2024-11-19 23:39:47.222618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms 00:27:01.252 [2024-11-19 23:39:47.222625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.222977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.222994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:01.252 [2024-11-19 23:39:47.223006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:27:01.252 [2024-11-19 23:39:47.223015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.243554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.243605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:01.252 [2024-11-19 23:39:47.243617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.523 ms 00:27:01.252 [2024-11-19 23:39:47.243625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.251283] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:01.252 [2024-11-19 23:39:47.254230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.254261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:01.252 [2024-11-19 23:39:47.254277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.563 ms 00:27:01.252 [2024-11-19 23:39:47.254288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.254370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.254385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:01.252 [2024-11-19 23:39:47.254394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:01.252 [2024-11-19 23:39:47.254401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.254465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.254476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:01.252 [2024-11-19 23:39:47.254489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:01.252 [2024-11-19 23:39:47.254503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.254524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.254533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:01.252 [2024-11-19 23:39:47.254541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:01.252 [2024-11-19 23:39:47.254548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.254586] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:01.252 [2024-11-19 23:39:47.254598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.254606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:01.252 [2024-11-19 23:39:47.254614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:01.252 [2024-11-19 23:39:47.254622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.258988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.259022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:01.252 [2024-11-19 23:39:47.259033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.346 ms 00:27:01.252 [2024-11-19 23:39:47.259041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.252 [2024-11-19 23:39:47.259115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.252 [2024-11-19 23:39:47.259125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:01.253 [2024-11-19 23:39:47.259137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:01.253 [2024-11-19 23:39:47.259145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.253 
[2024-11-19 23:39:47.260200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.446 ms, result 0 00:27:02.196  [2024-11-19T23:39:49.323Z] Copying: 12/1024 [MB] (12 MBps) [... intermediate Copying progress entries, 23:39:50Z through 23:40:43Z, advancing to 841/1024 [MB] at roughly 10-43 MBps per interval, collapsed ...] [2024-11-19T23:40:44.508Z] Copying: 858/1024 [MB] (16 MBps) 
[2024-11-19T23:40:45.450Z] Copying: 892/1024 [MB] (34 MBps) [2024-11-19T23:40:46.395Z] Copying: 932/1024 [MB] (39 MBps) [2024-11-19T23:40:47.337Z] Copying: 962/1024 [MB] (30 MBps) [2024-11-19T23:40:48.279Z] Copying: 980/1024 [MB] (17 MBps) [2024-11-19T23:40:49.661Z] Copying: 997/1024 [MB] (17 MBps) [2024-11-19T23:40:49.924Z] Copying: 1016/1024 [MB] (19 MBps) [2024-11-19T23:40:49.924Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 23:40:49.666132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.732 [2024-11-19 23:40:49.666198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:03.732 [2024-11-19 23:40:49.666213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:03.732 [2024-11-19 23:40:49.666223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.732 [2024-11-19 23:40:49.666253] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:03.732 [2024-11-19 23:40:49.667060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.732 [2024-11-19 23:40:49.667087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:03.732 [2024-11-19 23:40:49.667099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:28:03.732 [2024-11-19 23:40:49.667108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.732 [2024-11-19 23:40:49.670298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.732 [2024-11-19 23:40:49.670349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:03.732 [2024-11-19 23:40:49.670361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.164 ms 00:28:03.732 [2024-11-19 23:40:49.670368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.732 [2024-11-19 23:40:49.670402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.732 [2024-11-19 23:40:49.670418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:03.732 [2024-11-19 23:40:49.670426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:03.732 [2024-11-19 23:40:49.670435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.732 [2024-11-19 23:40:49.670495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.732 [2024-11-19 23:40:49.670505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:03.732 [2024-11-19 23:40:49.670514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:03.732 [2024-11-19 23:40:49.670522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.732 [2024-11-19 23:40:49.670535] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:03.732 [2024-11-19 23:40:49.670548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 
23:40:49.670583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:28:03.732 [2024-11-19 23:40:49.670928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.670995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:03.732 [2024-11-19 23:40:49.671053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:03.733 [2024-11-19 23:40:49.671472] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:03.733 [2024-11-19 23:40:49.671480] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d249e450-9437-4263-84c5-287c10346aed 00:28:03.733 [2024-11-19 23:40:49.671488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:03.733 [2024-11-19 23:40:49.671496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:03.733 
[2024-11-19 23:40:49.671503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:03.733 [2024-11-19 23:40:49.671511] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:03.733 [2024-11-19 23:40:49.671518] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:03.733 [2024-11-19 23:40:49.671526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:03.733 [2024-11-19 23:40:49.671534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:03.733 [2024-11-19 23:40:49.671541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:03.733 [2024-11-19 23:40:49.671547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:03.733 [2024-11-19 23:40:49.671556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.733 [2024-11-19 23:40:49.671565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:03.733 [2024-11-19 23:40:49.671574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:28:03.733 [2024-11-19 23:40:49.671584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.733 [2024-11-19 23:40:49.674025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.733 [2024-11-19 23:40:49.674058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:03.733 [2024-11-19 23:40:49.674068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.426 ms 00:28:03.733 [2024-11-19 23:40:49.674077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.733 [2024-11-19 23:40:49.674210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.733 [2024-11-19 23:40:49.674219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:03.733 [2024-11-19 23:40:49.674232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:03.733 [2024-11-19 23:40:49.674245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.733 [2024-11-19 23:40:49.681766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.733 [2024-11-19 23:40:49.681811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:03.733 [2024-11-19 23:40:49.681822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.733 [2024-11-19 23:40:49.681829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.733 [2024-11-19 23:40:49.681892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.733 [2024-11-19 23:40:49.681900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:03.733 [2024-11-19 23:40:49.681913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.733 [2024-11-19 23:40:49.681926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.733 [2024-11-19 23:40:49.681961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.733 [2024-11-19 23:40:49.681970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:03.733 [2024-11-19 23:40:49.681978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.733 [2024-11-19 23:40:49.681986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.733 [2024-11-19 23:40:49.682004] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:28:03.733 [2024-11-19 23:40:49.682011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:03.733 [2024-11-19 23:40:49.682019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.733 [2024-11-19 23:40:49.682030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.733 [2024-11-19 23:40:49.695127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.733 [2024-11-19 23:40:49.695189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:03.733 [2024-11-19 23:40:49.695200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.733 [2024-11-19 23:40:49.695208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.734 [2024-11-19 23:40:49.705073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.734 [2024-11-19 23:40:49.705122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.734 [2024-11-19 23:40:49.705134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.734 [2024-11-19 23:40:49.705149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.734 [2024-11-19 23:40:49.705194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.734 [2024-11-19 23:40:49.705204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.734 [2024-11-19 23:40:49.705212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.734 [2024-11-19 23:40:49.705220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.734 [2024-11-19 23:40:49.705254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.734 [2024-11-19 23:40:49.705263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.734 [2024-11-19 23:40:49.705270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.734 [2024-11-19 23:40:49.705278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.734 [2024-11-19 23:40:49.705337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.734 [2024-11-19 23:40:49.705347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.734 [2024-11-19 23:40:49.705355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.734 [2024-11-19 23:40:49.705364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.734 [2024-11-19 23:40:49.705388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.734 [2024-11-19 23:40:49.705397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:03.734 [2024-11-19 23:40:49.705405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.734 [2024-11-19 23:40:49.705413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.734 [2024-11-19 23:40:49.705452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.734 [2024-11-19 23:40:49.705462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.734 [2024-11-19 23:40:49.705470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.734 [2024-11-19 23:40:49.705478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:03.734 [2024-11-19 23:40:49.705521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:03.734 [2024-11-19 23:40:49.705531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.734 [2024-11-19 23:40:49.705540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:03.734 [2024-11-19 23:40:49.705547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.734 [2024-11-19 23:40:49.705677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.512 ms, result 0 00:28:03.995 00:28:03.995 00:28:03.995 23:40:50 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:04.257 [2024-11-19 23:40:50.212954] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:28:04.257 [2024-11-19 23:40:50.213107] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93088 ] 00:28:04.257 [2024-11-19 23:40:50.377252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.257 [2024-11-19 23:40:50.405591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.519 [2024-11-19 23:40:50.516440] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:04.519 [2024-11-19 23:40:50.516516] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:04.519 [2024-11-19 23:40:50.678107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.678173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:04.519 [2024-11-19 23:40:50.678189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:04.519 [2024-11-19 23:40:50.678202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.678263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.678279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:04.519 [2024-11-19 23:40:50.678287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:04.519 [2024-11-19 23:40:50.678297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.678325] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:04.519 [2024-11-19 23:40:50.678607] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:04.519 [2024-11-19 23:40:50.678626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.678635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:04.519 [2024-11-19 23:40:50.678645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:28:04.519 [2024-11-19 23:40:50.678655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.678963] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: 
clean 1, shm_clean 1 00:28:04.519 [2024-11-19 23:40:50.678991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.679000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:04.519 [2024-11-19 23:40:50.679009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:04.519 [2024-11-19 23:40:50.679017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.679076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.679088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:04.519 [2024-11-19 23:40:50.679097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:04.519 [2024-11-19 23:40:50.679106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.679364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.679387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:04.519 [2024-11-19 23:40:50.679396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:28:04.519 [2024-11-19 23:40:50.679404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.679491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.679501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:04.519 [2024-11-19 23:40:50.679509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:04.519 [2024-11-19 23:40:50.679516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.679586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.679596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:04.519 [2024-11-19 23:40:50.679604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:04.519 [2024-11-19 23:40:50.679611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.679638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:04.519 [2024-11-19 23:40:50.681821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.681860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:04.519 [2024-11-19 23:40:50.681871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:28:04.519 [2024-11-19 23:40:50.681879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.681921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.681929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:04.519 [2024-11-19 23:40:50.681938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:04.519 [2024-11-19 23:40:50.681950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.682001] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:04.519 [2024-11-19 23:40:50.682029] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: 
[FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:04.519 [2024-11-19 23:40:50.682069] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:04.519 [2024-11-19 23:40:50.682089] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:04.519 [2024-11-19 23:40:50.682196] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:04.519 [2024-11-19 23:40:50.682207] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:04.519 [2024-11-19 23:40:50.682222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:04.519 [2024-11-19 23:40:50.682236] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:04.519 [2024-11-19 23:40:50.682248] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:04.519 [2024-11-19 23:40:50.682262] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:04.519 [2024-11-19 23:40:50.682270] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:04.519 [2024-11-19 23:40:50.682277] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:04.519 [2024-11-19 23:40:50.682285] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:04.519 [2024-11-19 23:40:50.682296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.682303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:04.519 [2024-11-19 23:40:50.682317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:28:04.519 [2024-11-19 23:40:50.682323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.682405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.519 [2024-11-19 23:40:50.682414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:04.519 [2024-11-19 23:40:50.682428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:04.519 [2024-11-19 23:40:50.682434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.519 [2024-11-19 23:40:50.682542] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:04.519 [2024-11-19 23:40:50.682555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:04.519 [2024-11-19 23:40:50.682570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.519 [2024-11-19 23:40:50.682582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.519 [2024-11-19 23:40:50.682591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:04.519 [2024-11-19 23:40:50.682598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:04.519 [2024-11-19 23:40:50.682606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:04.519 [2024-11-19 23:40:50.682622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:04.519 [2024-11-19 23:40:50.682630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:04.519 [2024-11-19 23:40:50.682637] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.519 [2024-11-19 23:40:50.682646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:04.519 [2024-11-19 23:40:50.682654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:04.519 [2024-11-19 23:40:50.682664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.519 [2024-11-19 23:40:50.682672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:04.519 [2024-11-19 23:40:50.682680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:04.519 [2024-11-19 23:40:50.682687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.519 [2024-11-19 23:40:50.682695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:04.519 [2024-11-19 23:40:50.682703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:04.519 [2024-11-19 23:40:50.682713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.519 [2024-11-19 23:40:50.682721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:04.519 [2024-11-19 23:40:50.682744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:04.519 [2024-11-19 23:40:50.682753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.519 [2024-11-19 23:40:50.682761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:04.519 [2024-11-19 23:40:50.682768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:04.519 [2024-11-19 23:40:50.682777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.520 [2024-11-19 23:40:50.682785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:04.520 [2024-11-19 23:40:50.682793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:04.520 [2024-11-19 23:40:50.682801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.520 [2024-11-19 23:40:50.682809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:04.520 [2024-11-19 23:40:50.682816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:04.520 [2024-11-19 23:40:50.682824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.520 [2024-11-19 23:40:50.682832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:04.520 [2024-11-19 23:40:50.682839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:04.520 [2024-11-19 23:40:50.682846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.520 [2024-11-19 23:40:50.682858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:04.520 [2024-11-19 23:40:50.682866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:04.520 [2024-11-19 23:40:50.682872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.520 [2024-11-19 23:40:50.682879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:04.520 [2024-11-19 23:40:50.682885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:04.520 [2024-11-19 23:40:50.682892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.520 [2024-11-19 23:40:50.682898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:04.520 [2024-11-19 
23:40:50.682905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:04.520 [2024-11-19 23:40:50.682912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.520 [2024-11-19 23:40:50.682919] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:04.520 [2024-11-19 23:40:50.682929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:04.520 [2024-11-19 23:40:50.682937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.520 [2024-11-19 23:40:50.682947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.520 [2024-11-19 23:40:50.682957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:04.520 [2024-11-19 23:40:50.682964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:04.520 [2024-11-19 23:40:50.682970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:04.520 [2024-11-19 23:40:50.682980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:04.520 [2024-11-19 23:40:50.682986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:04.520 [2024-11-19 23:40:50.682993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:04.520 [2024-11-19 23:40:50.683001] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:04.520 [2024-11-19 23:40:50.683011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.520 [2024-11-19 23:40:50.683019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:04.520 [2024-11-19 23:40:50.683027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:04.520 [2024-11-19 23:40:50.683034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:04.520 [2024-11-19 23:40:50.683041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:04.520 [2024-11-19 23:40:50.683049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:04.520 [2024-11-19 23:40:50.683056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:04.520 [2024-11-19 23:40:50.683063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:04.520 [2024-11-19 23:40:50.683069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:04.520 [2024-11-19 23:40:50.683076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:04.520 [2024-11-19 23:40:50.683083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:04.520 [2024-11-19 23:40:50.683090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 
blk_sz:0x20 00:28:04.520 [2024-11-19 23:40:50.683099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:04.520 [2024-11-19 23:40:50.683107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:04.520 [2024-11-19 23:40:50.683113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:04.520 [2024-11-19 23:40:50.683120] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:04.520 [2024-11-19 23:40:50.683128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.520 [2024-11-19 23:40:50.683137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.520 [2024-11-19 23:40:50.683144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:04.520 [2024-11-19 23:40:50.683151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:04.520 [2024-11-19 23:40:50.683158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:04.520 [2024-11-19 23:40:50.683165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.520 [2024-11-19 23:40:50.683175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:04.520 [2024-11-19 23:40:50.683182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:28:04.520 [2024-11-19 23:40:50.683190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.520 [2024-11-19 23:40:50.693082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.520 [2024-11-19 23:40:50.693288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:04.520 [2024-11-19 23:40:50.693307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.848 ms 00:28:04.520 [2024-11-19 23:40:50.693315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.520 [2024-11-19 23:40:50.693404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.520 [2024-11-19 23:40:50.693413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:04.520 [2024-11-19 23:40:50.693422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:04.520 [2024-11-19 23:40:50.693430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.713666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.713761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.782 [2024-11-19 23:40:50.713778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.171 ms 00:28:04.782 [2024-11-19 23:40:50.713788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.713842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.713863] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.782 [2024-11-19 23:40:50.713874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:04.782 [2024-11-19 23:40:50.713884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.714011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.714030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.782 [2024-11-19 23:40:50.714045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:04.782 [2024-11-19 23:40:50.714054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.714202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.714235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.782 [2024-11-19 23:40:50.714246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:04.782 [2024-11-19 23:40:50.714256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.722484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.722535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.782 [2024-11-19 23:40:50.722546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.204 ms 00:28:04.782 [2024-11-19 23:40:50.722566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.722688] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:04.782 [2024-11-19 23:40:50.722702] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:04.782 [2024-11-19 23:40:50.722716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.722753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:04.782 [2024-11-19 23:40:50.722763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:04.782 [2024-11-19 23:40:50.722770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.735085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.735142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:04.782 [2024-11-19 23:40:50.735159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.292 ms 00:28:04.782 [2024-11-19 23:40:50.735168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.735297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.735307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:04.782 [2024-11-19 23:40:50.735316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:04.782 [2024-11-19 23:40:50.735324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.735377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.735387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:28:04.782 [2024-11-19 23:40:50.735399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:04.782 [2024-11-19 23:40:50.735406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.735722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.735760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:04.782 [2024-11-19 23:40:50.735786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:28:04.782 [2024-11-19 23:40:50.735794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.735817] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:04.782 [2024-11-19 23:40:50.735827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.735840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:04.782 [2024-11-19 23:40:50.735850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:04.782 [2024-11-19 23:40:50.735858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.745271] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:04.782 [2024-11-19 23:40:50.745424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.745434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:04.782 [2024-11-19 23:40:50.745445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.547 ms 00:28:04.782 [2024-11-19 23:40:50.745461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.748029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.748219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:04.782 [2024-11-19 23:40:50.748239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.540 ms 00:28:04.782 [2024-11-19 23:40:50.748252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.748359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.748369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:04.782 [2024-11-19 23:40:50.748378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:04.782 [2024-11-19 23:40:50.748386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.748413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.748422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:04.782 [2024-11-19 23:40:50.748430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:04.782 [2024-11-19 23:40:50.748437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.748474] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:04.782 [2024-11-19 23:40:50.748488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.748495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:28:04.782 [2024-11-19 23:40:50.748503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:04.782 [2024-11-19 23:40:50.748511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.754865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.755056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:04.782 [2024-11-19 23:40:50.755075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.333 ms 00:28:04.782 [2024-11-19 23:40:50.755083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.755165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.782 [2024-11-19 23:40:50.755180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:04.782 [2024-11-19 23:40:50.755189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:04.782 [2024-11-19 23:40:50.755200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.782 [2024-11-19 23:40:50.756483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.925 ms, result 0 00:28:06.167  [2024-11-19T23:40:53.303Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-19T23:40:54.248Z] Copying: 24/1024 [MB] (12 MBps) [2024-11-19T23:40:55.191Z] Copying: 45/1024 [MB] (20 MBps) [2024-11-19T23:40:56.135Z] Copying: 68/1024 [MB] (22 MBps) [2024-11-19T23:40:57.082Z] Copying: 86/1024 [MB] (17 MBps) [2024-11-19T23:40:58.025Z] Copying: 100/1024 [MB] (14 MBps) [2024-11-19T23:40:58.968Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-19T23:41:00.354Z] Copying: 122/1024 [MB] (10 MBps) [2024-11-19T23:41:00.969Z] Copying: 137/1024 [MB] (15 MBps) [2024-11-19T23:41:01.945Z] Copying: 161/1024 [MB] (23 MBps) [2024-11-19T23:41:03.332Z] Copying: 184/1024 [MB] (23 MBps) [2024-11-19T23:41:04.275Z] Copying: 202/1024 [MB] (17 MBps) [2024-11-19T23:41:05.221Z] Copying: 221/1024 [MB] (19 MBps) [2024-11-19T23:41:06.164Z] Copying: 240/1024 [MB] (18 MBps) [2024-11-19T23:41:07.108Z] Copying: 259/1024 [MB] (19 MBps) [2024-11-19T23:41:08.051Z] Copying: 279/1024 [MB] (19 MBps) [2024-11-19T23:41:08.992Z] Copying: 300/1024 [MB] (21 MBps) [2024-11-19T23:41:10.377Z] Copying: 319/1024 [MB] (18 MBps) [2024-11-19T23:41:10.950Z] Copying: 335/1024 [MB] (15 MBps) [2024-11-19T23:41:12.338Z] Copying: 347/1024 [MB] (11 MBps) [2024-11-19T23:41:13.281Z] Copying: 359/1024 [MB] (12 MBps) [2024-11-19T23:41:14.226Z] Copying: 371/1024 [MB] (12 MBps) [2024-11-19T23:41:15.169Z] Copying: 388/1024 [MB] (16 MBps) [2024-11-19T23:41:16.112Z] Copying: 401/1024 [MB] (13 MBps) [2024-11-19T23:41:17.057Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-19T23:41:17.999Z] Copying: 423/1024 [MB] (10 MBps) [2024-11-19T23:41:19.382Z] Copying: 433/1024 [MB] (10 MBps) [2024-11-19T23:41:19.953Z] Copying: 455/1024 [MB] (22 MBps) [2024-11-19T23:41:21.340Z] Copying: 476/1024 [MB] (20 MBps) [2024-11-19T23:41:22.284Z] Copying: 489/1024 [MB] (13 MBps) [2024-11-19T23:41:23.228Z] Copying: 500/1024 [MB] (11 MBps) [2024-11-19T23:41:24.170Z] Copying: 527/1024 [MB] (27 MBps) [2024-11-19T23:41:25.113Z] Copying: 544/1024 [MB] (16 MBps) [2024-11-19T23:41:26.059Z] Copying: 558/1024 [MB] (14 MBps) [2024-11-19T23:41:27.008Z] Copying: 574/1024 [MB] (15 MBps) [2024-11-19T23:41:27.953Z] Copying: 588/1024 [MB] (14 MBps) [2024-11-19T23:41:29.351Z] Copying: 608/1024 [MB] (20 MBps) 
[2024-11-19T23:41:30.295Z] Copying: 630/1024 [MB] (21 MBps) [2024-11-19T23:41:31.240Z] Copying: 649/1024 [MB] (18 MBps) [2024-11-19T23:41:32.189Z] Copying: 668/1024 [MB] (19 MBps) [2024-11-19T23:41:33.221Z] Copying: 682/1024 [MB] (13 MBps) [2024-11-19T23:41:34.163Z] Copying: 698/1024 [MB] (15 MBps) [2024-11-19T23:41:35.107Z] Copying: 722/1024 [MB] (24 MBps) [2024-11-19T23:41:36.053Z] Copying: 733/1024 [MB] (10 MBps) [2024-11-19T23:41:36.997Z] Copying: 744/1024 [MB] (10 MBps) [2024-11-19T23:41:38.383Z] Copying: 759/1024 [MB] (15 MBps) [2024-11-19T23:41:38.955Z] Copying: 776/1024 [MB] (16 MBps) [2024-11-19T23:41:40.342Z] Copying: 799/1024 [MB] (22 MBps) [2024-11-19T23:41:41.287Z] Copying: 817/1024 [MB] (18 MBps) [2024-11-19T23:41:42.246Z] Copying: 837/1024 [MB] (19 MBps) [2024-11-19T23:41:43.196Z] Copying: 857/1024 [MB] (20 MBps) [2024-11-19T23:41:44.143Z] Copying: 872/1024 [MB] (15 MBps) [2024-11-19T23:41:45.088Z] Copying: 891/1024 [MB] (18 MBps) [2024-11-19T23:41:46.032Z] Copying: 909/1024 [MB] (17 MBps) [2024-11-19T23:41:46.976Z] Copying: 935/1024 [MB] (26 MBps) [2024-11-19T23:41:48.362Z] Copying: 951/1024 [MB] (16 MBps) [2024-11-19T23:41:49.306Z] Copying: 970/1024 [MB] (19 MBps) [2024-11-19T23:41:50.248Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-19T23:41:51.193Z] Copying: 995/1024 [MB] (13 MBps) [2024-11-19T23:41:51.454Z] Copying: 1018/1024 [MB] (22 MBps) [2024-11-19T23:41:51.718Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 23:41:51.632817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.526 [2024-11-19 23:41:51.632908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:05.526 [2024-11-19 23:41:51.632929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:05.526 [2024-11-19 23:41:51.632941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.526 [2024-11-19 23:41:51.633151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:05.526 [2024-11-19 23:41:51.634113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.526 [2024-11-19 23:41:51.634145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:05.526 [2024-11-19 23:41:51.634161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:29:05.526 [2024-11-19 23:41:51.634174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.526 [2024-11-19 23:41:51.634475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.526 [2024-11-19 23:41:51.634490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:05.526 [2024-11-19 23:41:51.634501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:29:05.526 [2024-11-19 23:41:51.634512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.526 [2024-11-19 23:41:51.634549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.526 [2024-11-19 23:41:51.634565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:05.526 [2024-11-19 23:41:51.634583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:05.527 [2024-11-19 23:41:51.634593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.527 [2024-11-19 23:41:51.634665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.527 [2024-11-19 23:41:51.634677] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:05.527 [2024-11-19 23:41:51.634690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:05.527 [2024-11-19 23:41:51.634705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.527 [2024-11-19 23:41:51.634723] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:05.527 [2024-11-19 23:41:51.634765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 
state: free 00:29:05.527 [2024-11-19 23:41:51.634988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.634998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:05.527 [2024-11-19 23:41:51.635673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.635995] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.636004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.636011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.636019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:05.528 [2024-11-19 23:41:51.636034] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:05.528 [2024-11-19 23:41:51.636047] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d249e450-9437-4263-84c5-287c10346aed 00:29:05.528 [2024-11-19 23:41:51.636059] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:05.528 [2024-11-19 23:41:51.636067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:05.528 [2024-11-19 23:41:51.636074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:05.528 [2024-11-19 23:41:51.636083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:05.528 [2024-11-19 23:41:51.636098] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:05.528 [2024-11-19 23:41:51.636106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:05.528 [2024-11-19 23:41:51.636114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:05.528 [2024-11-19 23:41:51.636120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:05.528 [2024-11-19 23:41:51.636127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:05.528 [2024-11-19 23:41:51.636135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.528 [2024-11-19 23:41:51.636143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:05.528 [2024-11-19 23:41:51.636151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:29:05.528 [2024-11-19 23:41:51.636158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.638759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.528 [2024-11-19 23:41:51.638791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:05.528 [2024-11-19 23:41:51.638802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:29:05.528 [2024-11-19 23:41:51.638811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.638921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.528 [2024-11-19 23:41:51.638929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:05.528 [2024-11-19 23:41:51.638946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:05.528 [2024-11-19 23:41:51.638957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.646649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.646910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:05.528 [2024-11-19 23:41:51.646933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.646941] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.647015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.647024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:05.528 [2024-11-19 23:41:51.647041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.647052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.647116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.647126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:05.528 [2024-11-19 23:41:51.647136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.647144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.647161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.647169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:05.528 [2024-11-19 23:41:51.647182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.647190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.660313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.660366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:05.528 [2024-11-19 23:41:51.660378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.660386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.671043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.671093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:05.528 [2024-11-19 23:41:51.671104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.671124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.671171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.671181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:05.528 [2024-11-19 23:41:51.671190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.671198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.528 [2024-11-19 23:41:51.671233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.528 [2024-11-19 23:41:51.671242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:05.528 [2024-11-19 23:41:51.671250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.528 [2024-11-19 23:41:51.671258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.529 [2024-11-19 23:41:51.671311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.529 [2024-11-19 23:41:51.671328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:05.529 [2024-11-19 23:41:51.671336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:29:05.529 [2024-11-19 23:41:51.671343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.529 [2024-11-19 23:41:51.671366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.529 [2024-11-19 23:41:51.671376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:05.529 [2024-11-19 23:41:51.671384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.529 [2024-11-19 23:41:51.671392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.529 [2024-11-19 23:41:51.671432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.529 [2024-11-19 23:41:51.671441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:05.529 [2024-11-19 23:41:51.671450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.529 [2024-11-19 23:41:51.671458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.529 [2024-11-19 23:41:51.671499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.529 [2024-11-19 23:41:51.671509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:05.529 [2024-11-19 23:41:51.671517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.529 [2024-11-19 23:41:51.671525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.529 [2024-11-19 23:41:51.671654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.807 ms, result 0 00:29:05.790 00:29:05.790 00:29:05.790 23:41:51 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:08.337 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:08.337 23:41:54 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:08.337 [2024-11-19 23:41:54.159148] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:29:08.337 [2024-11-19 23:41:54.159255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93724 ] 00:29:08.337 [2024-11-19 23:41:54.316670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.337 [2024-11-19 23:41:54.338360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.337 [2024-11-19 23:41:54.434671] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:08.337 [2024-11-19 23:41:54.434754] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:08.600 [2024-11-19 23:41:54.596174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.596240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:08.600 [2024-11-19 23:41:54.596260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:08.600 [2024-11-19 23:41:54.596269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.596330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.596341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:08.600 [2024-11-19 23:41:54.596354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:08.600 [2024-11-19 23:41:54.596366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.596395] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:08.600 [2024-11-19 23:41:54.596674] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:08.600 [2024-11-19 23:41:54.596709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.596718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:08.600 [2024-11-19 23:41:54.596727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:29:08.600 [2024-11-19 23:41:54.596757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.597041] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:08.600 [2024-11-19 23:41:54.597083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.597092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:08.600 [2024-11-19 23:41:54.597102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:08.600 [2024-11-19 23:41:54.597114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.597215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.597230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:08.600 [2024-11-19 23:41:54.597239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:08.600 [2024-11-19 23:41:54.597254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.597510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:08.600 [2024-11-19 23:41:54.597538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:08.600 [2024-11-19 23:41:54.597547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:29:08.600 [2024-11-19 23:41:54.597555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.597643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.597653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:08.600 [2024-11-19 23:41:54.597661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:08.600 [2024-11-19 23:41:54.597672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.597694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.597704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:08.600 [2024-11-19 23:41:54.597712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:08.600 [2024-11-19 23:41:54.597719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.600 [2024-11-19 23:41:54.597763] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:08.600 [2024-11-19 23:41:54.599900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.600 [2024-11-19 23:41:54.599943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:08.601 [2024-11-19 23:41:54.599955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:29:08.601 [2024-11-19 23:41:54.599965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.601 [2024-11-19 23:41:54.600002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.601 [2024-11-19 23:41:54.600011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:08.601 [2024-11-19 23:41:54.600020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:08.601 [2024-11-19 23:41:54.600029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.601 [2024-11-19 23:41:54.600085] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:08.601 [2024-11-19 23:41:54.600110] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:08.601 [2024-11-19 23:41:54.600151] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:08.601 [2024-11-19 23:41:54.600169] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:08.601 [2024-11-19 23:41:54.600281] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:08.601 [2024-11-19 23:41:54.600301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:08.601 [2024-11-19 23:41:54.600314] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:08.601 [2024-11-19 23:41:54.600325] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600337] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600349] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:08.601 [2024-11-19 23:41:54.600360] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:08.601 [2024-11-19 23:41:54.600368] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:08.601 [2024-11-19 23:41:54.600375] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:08.601 [2024-11-19 23:41:54.600386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.601 [2024-11-19 23:41:54.600394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:08.601 [2024-11-19 23:41:54.600402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:29:08.601 [2024-11-19 23:41:54.600409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.601 [2024-11-19 23:41:54.600494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.601 [2024-11-19 23:41:54.600501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:08.601 [2024-11-19 23:41:54.600515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:08.601 [2024-11-19 23:41:54.600522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.601 [2024-11-19 23:41:54.600630] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:08.601 [2024-11-19 23:41:54.600647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:08.601 [2024-11-19 23:41:54.600658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:08.601 [2024-11-19 23:41:54.600681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:08.601 [2024-11-19 23:41:54.600709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:08.601 [2024-11-19 23:41:54.600723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:08.601 [2024-11-19 23:41:54.600745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:08.601 [2024-11-19 23:41:54.600752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:08.601 [2024-11-19 23:41:54.600760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:08.601 [2024-11-19 23:41:54.600767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:08.601 [2024-11-19 23:41:54.600774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:08.601 [2024-11-19 23:41:54.600788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600798] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:08.601 [2024-11-19 23:41:54.600813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:08.601 [2024-11-19 23:41:54.600835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:08.601 [2024-11-19 23:41:54.600856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:08.601 [2024-11-19 23:41:54.600876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:08.601 [2024-11-19 23:41:54.600890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:08.601 [2024-11-19 23:41:54.600897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:08.601 [2024-11-19 23:41:54.600917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:08.601 [2024-11-19 23:41:54.600924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:08.601 [2024-11-19 23:41:54.600930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:08.601 [2024-11-19 23:41:54.600938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:08.601 [2024-11-19 23:41:54.600945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:08.601 [2024-11-19 23:41:54.600952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:08.601 [2024-11-19 23:41:54.600965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:08.601 [2024-11-19 23:41:54.600972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.601 [2024-11-19 23:41:54.600978] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:08.601 [2024-11-19 23:41:54.600986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:08.601 [2024-11-19 23:41:54.600993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:08.601 [2024-11-19 23:41:54.601001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:08.601 [2024-11-19 23:41:54.601011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:08.601 [2024-11-19 23:41:54.601018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:08.601 [2024-11-19 23:41:54.601025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:08.601 
[2024-11-19 23:41:54.601035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:08.601 [2024-11-19 23:41:54.601041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:08.601 [2024-11-19 23:41:54.601050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:08.601 [2024-11-19 23:41:54.601059] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:08.601 [2024-11-19 23:41:54.601071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.601 [2024-11-19 23:41:54.601084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:08.601 [2024-11-19 23:41:54.601091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:08.601 [2024-11-19 23:41:54.601099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:08.601 [2024-11-19 23:41:54.601106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:08.601 [2024-11-19 23:41:54.601114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:08.601 [2024-11-19 23:41:54.601120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:08.601 [2024-11-19 23:41:54.601128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:08.601 [2024-11-19 23:41:54.601135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:08.601 [2024-11-19 23:41:54.601143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:08.601 [2024-11-19 23:41:54.601150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:08.601 [2024-11-19 23:41:54.601157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:08.601 [2024-11-19 23:41:54.601167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:08.601 [2024-11-19 23:41:54.601175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:08.601 [2024-11-19 23:41:54.601182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:08.601 [2024-11-19 23:41:54.601189] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:08.602 [2024-11-19 23:41:54.601198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.602 [2024-11-19 23:41:54.601205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:08.602 [2024-11-19 23:41:54.601213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:08.602 [2024-11-19 23:41:54.601220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:08.602 [2024-11-19 23:41:54.601227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:08.602 [2024-11-19 23:41:54.601234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.601242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:08.602 [2024-11-19 23:41:54.601250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:29:08.602 [2024-11-19 23:41:54.601257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.611358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.611561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:08.602 [2024-11-19 23:41:54.611581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.059 ms 00:29:08.602 [2024-11-19 23:41:54.611589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.611679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.611689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:08.602 [2024-11-19 23:41:54.611698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:08.602 [2024-11-19 23:41:54.611714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.632111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.632175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:08.602 [2024-11-19 23:41:54.632188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.281 ms 00:29:08.602 [2024-11-19 23:41:54.632196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.632242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.632252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:08.602 [2024-11-19 23:41:54.632268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:08.602 [2024-11-19 23:41:54.632275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.632381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.632392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:08.602 [2024-11-19 23:41:54.632412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:08.602 [2024-11-19 23:41:54.632424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.632543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.632555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:08.602 [2024-11-19 23:41:54.632563] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:08.602 [2024-11-19 23:41:54.632571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.640180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.640226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:08.602 [2024-11-19 23:41:54.640237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.588 ms 00:29:08.602 [2024-11-19 23:41:54.640252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.640381] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:08.602 [2024-11-19 23:41:54.640394] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:08.602 [2024-11-19 23:41:54.640405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.640413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:08.602 [2024-11-19 23:41:54.640422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:08.602 [2024-11-19 23:41:54.640431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.653184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.653230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:08.602 [2024-11-19 23:41:54.653248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.732 ms 00:29:08.602 [2024-11-19 23:41:54.653255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.653387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.653397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:08.602 [2024-11-19 23:41:54.653405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:29:08.602 [2024-11-19 23:41:54.653412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.653468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.653484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:08.602 [2024-11-19 23:41:54.653495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:08.602 [2024-11-19 23:41:54.653503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.653862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.653881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:08.602 [2024-11-19 23:41:54.653893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:29:08.602 [2024-11-19 23:41:54.653900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.653919] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:08.602 [2024-11-19 23:41:54.653929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.653938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:08.602 [2024-11-19 23:41:54.653949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:08.602 [2024-11-19 23:41:54.653958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.663324] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:08.602 [2024-11-19 23:41:54.663665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.663684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:08.602 [2024-11-19 23:41:54.663701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.688 ms 00:29:08.602 [2024-11-19 23:41:54.663709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.666238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.666280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:08.602 [2024-11-19 23:41:54.666291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.464 ms 00:29:08.602 [2024-11-19 23:41:54.666301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.666401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.666413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:08.602 [2024-11-19 23:41:54.666422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:08.602 [2024-11-19 23:41:54.666430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.666457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.666466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:08.602 [2024-11-19 23:41:54.666474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:08.602 [2024-11-19 23:41:54.666481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.666517] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:08.602 [2024-11-19 23:41:54.666532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.666539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:08.602 [2024-11-19 23:41:54.666546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:08.602 [2024-11-19 23:41:54.666553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.673661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.673874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:08.602 [2024-11-19 23:41:54.673944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.085 ms 00:29:08.602 [2024-11-19 23:41:54.673979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.674072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:08.602 [2024-11-19 23:41:54.674099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:08.602 [2024-11-19 23:41:54.674119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:29:08.602 [2024-11-19 23:41:54.674140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:08.602 [2024-11-19 23:41:54.675339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 78.703 ms, result 0 00:29:09.547
[spdk_dd progress output condensed: 60 intermediate "Copying: N/1024 [MB]" updates, from 15/1024 at 2024-11-19T23:41:56.689Z to 1023/1024 at 2024-11-19T23:42:55.491Z, elided]
[2024-11-19T23:42:55.491Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-11-19 23:42:55.372811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.299 [2024-11-19 23:42:55.372870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:09.299 [2024-11-19 23:42:55.372884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:09.299 [2024-11-19 23:42:55.372893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.299 [2024-11-19 23:42:55.375785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:09.299 [2024-11-19 23:42:55.377515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.299 [2024-11-19 23:42:55.377550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:09.299 [2024-11-19 23:42:55.377560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:30:09.299 [2024-11-19 23:42:55.377576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.299 [2024-11-19 23:42:55.387745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.299 [2024-11-19 23:42:55.387781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:09.299 [2024-11-19 23:42:55.387792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.980 ms 00:30:09.299 [2024-11-19 23:42:55.387799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.299 [2024-11-19 23:42:55.387824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.299 [2024-11-19 23:42:55.387833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:09.299 [2024-11-19 23:42:55.387842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:09.299 [2024-11-19 23:42:55.387849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.299 [2024-11-19 23:42:55.387891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.299 [2024-11-19 23:42:55.387900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:09.299 [2024-11-19 23:42:55.387910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:09.299 [2024-11-19 23:42:55.387917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.299 [2024-11-19 23:42:55.387929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:09.299 [2024-11-19 23:42:55.387940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:30:09.299 [2024-11-19 23:42:55.387949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.387957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.387964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19
23:42:55.387971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.387979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.387986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.387994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:30:09.299 [2024-11-19 23:42:55.388163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:09.299 [2024-11-19 23:42:55.388359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:09.300 [2024-11-19 23:42:55.388684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:09.300 [2024-11-19 23:42:55.388694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d249e450-9437-4263-84c5-287c10346aed 00:30:09.300 [2024-11-19 23:42:55.388702] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:30:09.300 [2024-11-19 23:42:55.388709] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 
00:30:09.300 [2024-11-19 23:42:55.388716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:30:09.300 [2024-11-19 23:42:55.388724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:30:09.300 [2024-11-19 23:42:55.388744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:09.300 [2024-11-19 23:42:55.388755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:09.300 [2024-11-19 23:42:55.388762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:09.300 [2024-11-19 23:42:55.388770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:09.300 [2024-11-19 23:42:55.388776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:09.300 [2024-11-19 23:42:55.388783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.300 [2024-11-19 23:42:55.388790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:09.300 [2024-11-19 23:42:55.388798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:30:09.300 [2024-11-19 23:42:55.388809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.390271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.300 [2024-11-19 23:42:55.390294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:09.300 [2024-11-19 23:42:55.390311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:30:09.300 [2024-11-19 23:42:55.390321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.390401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.300 [2024-11-19 23:42:55.390409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:09.300 [2024-11-19 23:42:55.390421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:09.300 [2024-11-19 23:42:55.390428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.395151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.395256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:09.300 [2024-11-19 23:42:55.395310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.395333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.395395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.395495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:09.300 [2024-11-19 23:42:55.395518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.395537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.395579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.395600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:09.300 [2024-11-19 23:42:55.395655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.395677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.395724] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.395761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:09.300 [2024-11-19 23:42:55.395780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.395799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.404126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.404257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:09.300 [2024-11-19 23:42:55.404318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.404339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.411831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.411966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:09.300 [2024-11-19 23:42:55.412022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.412045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.412152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.412179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:09.300 [2024-11-19 23:42:55.412227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.412249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.412417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.412442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:09.300 [2024-11-19 23:42:55.412462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.300 [2024-11-19 23:42:55.412514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.300 [2024-11-19 23:42:55.412582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.300 [2024-11-19 23:42:55.412616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:09.300 [2024-11-19 23:42:55.412636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.301 [2024-11-19 23:42:55.412691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.301 [2024-11-19 23:42:55.412746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.301 [2024-11-19 23:42:55.412778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:09.301 [2024-11-19 23:42:55.412847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.301 [2024-11-19 23:42:55.412894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.301 [2024-11-19 23:42:55.412943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.301 [2024-11-19 23:42:55.412995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:09.301 [2024-11-19 23:42:55.413019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.301 [2024-11-19 23:42:55.413067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:09.301 [2024-11-19 23:42:55.413129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.301 [2024-11-19 23:42:55.413154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:09.301 [2024-11-19 23:42:55.413207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.301 [2024-11-19 23:42:55.413229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.301 [2024-11-19 23:42:55.413359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.797 ms, result 0 00:30:10.685 00:30:10.685 00:30:10.685 23:42:56 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:30:10.685 [2024-11-19 23:42:56.566790] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 00:30:10.685 [2024-11-19 23:42:56.566933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94354 ] 00:30:10.685 [2024-11-19 23:42:56.730105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.685 [2024-11-19 23:42:56.758945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.685 [2024-11-19 23:42:56.868058] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.685 [2024-11-19 23:42:56.868130] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:10.947 [2024-11-19 23:42:57.030167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.947 [2024-11-19 23:42:57.030233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:10.947 [2024-11-19 23:42:57.030249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:10.947 [2024-11-19 23:42:57.030258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.947 [2024-11-19 23:42:57.030315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.947 [2024-11-19 23:42:57.030327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:10.947 [2024-11-19 23:42:57.030341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:10.947 [2024-11-19 23:42:57.030349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.947 [2024-11-19 23:42:57.030372] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:10.947 [2024-11-19 23:42:57.030661] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:10.947 [2024-11-19 23:42:57.030681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.947 [2024-11-19 23:42:57.030691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:10.947 [2024-11-19 23:42:57.030701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:30:10.947 [2024-11-19 23:42:57.030715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.947 [2024-11-19 23:42:57.030993] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:10.947 [2024-11-19 23:42:57.031018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.947 [2024-11-19 23:42:57.031029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:10.948 [2024-11-19 23:42:57.031039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:10.948 [2024-11-19 23:42:57.031047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.031166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.031184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:10.948 [2024-11-19 23:42:57.031193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:10.948 [2024-11-19 23:42:57.031201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.031632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.031646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:10.948 [2024-11-19 23:42:57.031659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:30:10.948 [2024-11-19 23:42:57.031667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.032030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.032071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:10.948 [2024-11-19 23:42:57.032083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:10.948 [2024-11-19 23:42:57.032090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.032123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.032132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:10.948 [2024-11-19 23:42:57.032142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:10.948 [2024-11-19 23:42:57.032150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.032176] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:10.948 [2024-11-19 23:42:57.034370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.034418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:10.948 [2024-11-19 23:42:57.034431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:30:10.948 [2024-11-19 23:42:57.034455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.034496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.034505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:10.948 [2024-11-19 23:42:57.034515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:10.948 [2024-11-19 23:42:57.034524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.034582] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:10.948 [2024-11-19 23:42:57.034607] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:10.948 [2024-11-19 23:42:57.034649] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:10.948 [2024-11-19 23:42:57.034667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:10.948 [2024-11-19 23:42:57.034801] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:10.948 [2024-11-19 23:42:57.034816] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:10.948 [2024-11-19 23:42:57.034829] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:10.948 [2024-11-19 23:42:57.034841] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:10.948 [2024-11-19 23:42:57.034852] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:10.948 [2024-11-19 23:42:57.034867] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:10.948 [2024-11-19 23:42:57.034882] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:10.948 [2024-11-19 23:42:57.034891] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:10.948 [2024-11-19 23:42:57.034899] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:10.948 [2024-11-19 23:42:57.034908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.034922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:10.948 [2024-11-19 23:42:57.034933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:30:10.948 [2024-11-19 23:42:57.034944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.035027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.948 [2024-11-19 23:42:57.035038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:10.948 [2024-11-19 23:42:57.035053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:10.948 [2024-11-19 23:42:57.035062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.948 [2024-11-19 23:42:57.035163] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:10.948 [2024-11-19 23:42:57.035182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:10.948 [2024-11-19 23:42:57.035193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:10.948 [2024-11-19 23:42:57.035223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:10.948 [2024-11-19 23:42:57.035255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:10.948 [2024-11-19 
23:42:57.035262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:10.948 [2024-11-19 23:42:57.035269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:10.948 [2024-11-19 23:42:57.035276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:10.948 [2024-11-19 23:42:57.035284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:10.948 [2024-11-19 23:42:57.035293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:10.948 [2024-11-19 23:42:57.035301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:10.948 [2024-11-19 23:42:57.035308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:10.948 [2024-11-19 23:42:57.035322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:10.948 [2024-11-19 23:42:57.035343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:10.948 [2024-11-19 23:42:57.035366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:10.948 [2024-11-19 23:42:57.035386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:10.948 [2024-11-19 23:42:57.035407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:10.948 [2024-11-19 23:42:57.035426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:10.948 [2024-11-19 23:42:57.035439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:10.948 [2024-11-19 23:42:57.035445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:10.948 [2024-11-19 23:42:57.035452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:10.948 [2024-11-19 23:42:57.035463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:10.948 [2024-11-19 23:42:57.035469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:10.948 [2024-11-19 23:42:57.035475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:30:10.948 [2024-11-19 23:42:57.035488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:10.948 [2024-11-19 23:42:57.035495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035501] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:10.948 [2024-11-19 23:42:57.035509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:10.948 [2024-11-19 23:42:57.035522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:10.948 [2024-11-19 23:42:57.035541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:10.948 [2024-11-19 23:42:57.035548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:10.948 [2024-11-19 23:42:57.035555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:10.948 [2024-11-19 23:42:57.035562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:10.948 [2024-11-19 23:42:57.035569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:10.948 [2024-11-19 23:42:57.035576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:10.948 [2024-11-19 23:42:57.035586] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:10.949 [2024-11-19 23:42:57.035597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:10.949 [2024-11-19 23:42:57.035609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:10.949 [2024-11-19 23:42:57.035617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:10.949 [2024-11-19 23:42:57.035625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:10.949 [2024-11-19 23:42:57.035631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:10.949 [2024-11-19 23:42:57.035638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:10.949 [2024-11-19 23:42:57.035647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:10.949 [2024-11-19 23:42:57.035655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:10.949 [2024-11-19 23:42:57.035663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:10.949 [2024-11-19 23:42:57.035669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:10.949 [2024-11-19 23:42:57.035676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:10.949 [2024-11-19 23:42:57.035683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:10.949 [2024-11-19 23:42:57.035690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:10.949 [2024-11-19 23:42:57.035697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:10.949 [2024-11-19 23:42:57.035756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:10.949 [2024-11-19 23:42:57.035768] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:10.949 [2024-11-19 23:42:57.035781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:10.949 [2024-11-19 23:42:57.035791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:10.949 [2024-11-19 23:42:57.035798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:10.949 [2024-11-19 23:42:57.035805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:10.949 [2024-11-19 23:42:57.035813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:10.949 [2024-11-19 23:42:57.035822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.035830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:10.949 [2024-11-19 23:42:57.035839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:30:10.949 [2024-11-19 23:42:57.035847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.046445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.046641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:10.949 [2024-11-19 23:42:57.046710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.553 ms 00:30:10.949 [2024-11-19 23:42:57.046760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.046870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.046895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:10.949 [2024-11-19 23:42:57.046920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:10.949 [2024-11-19 23:42:57.046995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.066157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.066368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:10.949 [2024-11-19 23:42:57.066446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.081 ms 00:30:10.949 [2024-11-19 23:42:57.066472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.066532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 
23:42:57.066558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:10.949 [2024-11-19 23:42:57.066587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:10.949 [2024-11-19 23:42:57.066611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.066757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.066795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:10.949 [2024-11-19 23:42:57.066821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:30:10.949 [2024-11-19 23:42:57.066843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.066984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.067065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:10.949 [2024-11-19 23:42:57.067087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:30:10.949 [2024-11-19 23:42:57.067107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.074918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.075080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:10.949 [2024-11-19 23:42:57.075137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.669 ms 00:30:10.949 [2024-11-19 23:42:57.075168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.075312] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:10.949 [2024-11-19 23:42:57.075354] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:10.949 [2024-11-19 23:42:57.075387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.075471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:10.949 [2024-11-19 23:42:57.075497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:10.949 [2024-11-19 23:42:57.075522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.088028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.088184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:10.949 [2024-11-19 23:42:57.088246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.470 ms 00:30:10.949 [2024-11-19 23:42:57.088268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.088424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.088454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:10.949 [2024-11-19 23:42:57.088531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:10.949 [2024-11-19 23:42:57.088555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.088632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.088657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:30:10.949 [2024-11-19 23:42:57.088683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:10.949 [2024-11-19 23:42:57.088785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.089127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.089236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:10.949 [2024-11-19 23:42:57.089298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:30:10.949 [2024-11-19 23:42:57.089322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.089357] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:10.949 [2024-11-19 23:42:57.089432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.089456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:10.949 [2024-11-19 23:42:57.089480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:10.949 [2024-11-19 23:42:57.089554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.099173] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:10.949 [2024-11-19 23:42:57.099465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.099547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:10.949 [2024-11-19 23:42:57.099603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.869 ms 00:30:10.949 [2024-11-19 23:42:57.099627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.102411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.102570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:10.949 [2024-11-19 23:42:57.102588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.729 ms 00:30:10.949 [2024-11-19 23:42:57.102597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.949 [2024-11-19 23:42:57.102694] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:10.949 [2024-11-19 23:42:57.103340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.949 [2024-11-19 23:42:57.103365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:10.950 [2024-11-19 23:42:57.103377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:30:10.950 [2024-11-19 23:42:57.103390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.950 [2024-11-19 23:42:57.103420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.950 [2024-11-19 23:42:57.103429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:10.950 [2024-11-19 23:42:57.103442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:10.950 [2024-11-19 23:42:57.103451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.950 [2024-11-19 23:42:57.103489] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:10.950 
[2024-11-19 23:42:57.103500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.950 [2024-11-19 23:42:57.103508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:10.950 [2024-11-19 23:42:57.103517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:10.950 [2024-11-19 23:42:57.103525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.950 [2024-11-19 23:42:57.110417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.950 [2024-11-19 23:42:57.110485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:10.950 [2024-11-19 23:42:57.110500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:30:10.950 [2024-11-19 23:42:57.110509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.950 [2024-11-19 23:42:57.110610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:10.950 [2024-11-19 23:42:57.110621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:10.950 [2024-11-19 23:42:57.110634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:10.950 [2024-11-19 23:42:57.110643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:10.950 [2024-11-19 23:42:57.111991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.303 ms, result 0 00:30:12.336  [2024-11-19T23:42:59.470Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-19T23:43:00.412Z] Copying: 24/1024 [MB] (14 MBps) [2024-11-19T23:43:01.356Z] Copying: 39/1024 [MB] (14 MBps) [2024-11-19T23:43:02.741Z] Copying: 59/1024 [MB] (20 MBps) [2024-11-19T23:43:03.314Z] Copying: 79/1024 [MB] (19 MBps) [2024-11-19T23:43:04.703Z] Copying: 101/1024 [MB] (22 MBps) [2024-11-19T23:43:05.646Z] Copying: 118/1024 [MB] (16 MBps) [2024-11-19T23:43:06.589Z] Copying: 133/1024 [MB] (15 MBps) [2024-11-19T23:43:07.532Z] Copying: 172/1024 [MB] (38 MBps) [2024-11-19T23:43:08.573Z] Copying: 191/1024 [MB] (18 MBps) [2024-11-19T23:43:09.516Z] Copying: 208/1024 [MB] (16 MBps) [2024-11-19T23:43:10.473Z] Copying: 221/1024 [MB] (13 MBps) [2024-11-19T23:43:11.418Z] Copying: 236/1024 [MB] (14 MBps) [2024-11-19T23:43:12.360Z] Copying: 252/1024 [MB] (16 MBps) [2024-11-19T23:43:13.309Z] Copying: 263/1024 [MB] (10 MBps) [2024-11-19T23:43:14.698Z] Copying: 273/1024 [MB] (10 MBps) [2024-11-19T23:43:15.643Z] Copying: 296/1024 [MB] (22 MBps) [2024-11-19T23:43:16.588Z] Copying: 314/1024 [MB] (17 MBps) [2024-11-19T23:43:17.531Z] Copying: 325/1024 [MB] (10 MBps) [2024-11-19T23:43:18.473Z] Copying: 336/1024 [MB] (10 MBps) [2024-11-19T23:43:19.425Z] Copying: 347/1024 [MB] (11 MBps) [2024-11-19T23:43:20.368Z] Copying: 358/1024 [MB] (11 MBps) [2024-11-19T23:43:21.321Z] Copying: 372/1024 [MB] (13 MBps) [2024-11-19T23:43:22.708Z] Copying: 382/1024 [MB] (10 MBps) [2024-11-19T23:43:23.652Z] Copying: 393/1024 [MB] (10 MBps) [2024-11-19T23:43:24.595Z] Copying: 403/1024 [MB] (10 MBps) [2024-11-19T23:43:25.537Z] Copying: 414/1024 [MB] (10 MBps) [2024-11-19T23:43:26.484Z] Copying: 424/1024 [MB] (10 MBps) [2024-11-19T23:43:27.428Z] Copying: 437/1024 [MB] (12 MBps) [2024-11-19T23:43:28.370Z] Copying: 448/1024 [MB] (11 MBps) [2024-11-19T23:43:29.318Z] Copying: 459/1024 [MB] (10 MBps) [2024-11-19T23:43:30.704Z] Copying: 470/1024 [MB] (10 MBps) [2024-11-19T23:43:31.642Z] Copying: 482/1024 [MB] (11 MBps) [2024-11-19T23:43:32.586Z] Copying: 495/1024 
[MB] (13 MBps) [2024-11-19T23:43:33.541Z] Copying: 506/1024 [MB] (10 MBps) [2024-11-19T23:43:34.515Z] Copying: 517/1024 [MB] (10 MBps) [2024-11-19T23:43:35.468Z] Copying: 532/1024 [MB] (15 MBps) [2024-11-19T23:43:36.408Z] Copying: 549/1024 [MB] (16 MBps) [2024-11-19T23:43:37.351Z] Copying: 563/1024 [MB] (13 MBps) [2024-11-19T23:43:38.740Z] Copying: 578/1024 [MB] (15 MBps) [2024-11-19T23:43:39.313Z] Copying: 596/1024 [MB] (17 MBps) [2024-11-19T23:43:40.336Z] Copying: 611/1024 [MB] (14 MBps) [2024-11-19T23:43:41.721Z] Copying: 623/1024 [MB] (11 MBps) [2024-11-19T23:43:42.667Z] Copying: 637/1024 [MB] (14 MBps) [2024-11-19T23:43:43.612Z] Copying: 651/1024 [MB] (13 MBps) [2024-11-19T23:43:44.558Z] Copying: 670/1024 [MB] (19 MBps) [2024-11-19T23:43:45.501Z] Copying: 692/1024 [MB] (21 MBps) [2024-11-19T23:43:46.446Z] Copying: 708/1024 [MB] (15 MBps) [2024-11-19T23:43:47.391Z] Copying: 736/1024 [MB] (27 MBps) [2024-11-19T23:43:48.335Z] Copying: 753/1024 [MB] (17 MBps) [2024-11-19T23:43:49.723Z] Copying: 769/1024 [MB] (15 MBps) [2024-11-19T23:43:50.670Z] Copying: 787/1024 [MB] (18 MBps) [2024-11-19T23:43:51.612Z] Copying: 817/1024 [MB] (29 MBps) [2024-11-19T23:43:52.557Z] Copying: 836/1024 [MB] (19 MBps) [2024-11-19T23:43:53.499Z] Copying: 865/1024 [MB] (28 MBps) [2024-11-19T23:43:54.444Z] Copying: 884/1024 [MB] (19 MBps) [2024-11-19T23:43:55.386Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-19T23:43:56.330Z] Copying: 914/1024 [MB] (18 MBps) [2024-11-19T23:43:57.717Z] Copying: 941/1024 [MB] (27 MBps) [2024-11-19T23:43:58.662Z] Copying: 959/1024 [MB] (17 MBps) [2024-11-19T23:43:59.607Z] Copying: 975/1024 [MB] (15 MBps) [2024-11-19T23:44:00.551Z] Copying: 985/1024 [MB] (10 MBps) [2024-11-19T23:44:00.812Z] Copying: 1015/1024 [MB] (29 MBps) [2024-11-19T23:44:01.074Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-19 23:44:00.875512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.882 [2024-11-19 23:44:00.875926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:14.882 [2024-11-19 23:44:00.876021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:14.882 [2024-11-19 23:44:00.876063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.882 [2024-11-19 23:44:00.876120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:14.882 [2024-11-19 23:44:00.877113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.882 [2024-11-19 23:44:00.877277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:14.882 [2024-11-19 23:44:00.877488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:31:14.882 [2024-11-19 23:44:00.877532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.882 [2024-11-19 23:44:00.877823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.882 [2024-11-19 23:44:00.877983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:14.882 [2024-11-19 23:44:00.878009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:31:14.882 [2024-11-19 23:44:00.878030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.882 [2024-11-19 23:44:00.878072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.882 [2024-11-19 23:44:00.878095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 
00:31:14.882 [2024-11-19 23:44:00.878117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:14.882 [2024-11-19 23:44:00.878138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.882 [2024-11-19 23:44:00.878212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.882 [2024-11-19 23:44:00.878240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:14.882 [2024-11-19 23:44:00.878610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:14.882 [2024-11-19 23:44:00.878651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.882 [2024-11-19 23:44:00.878768] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:14.882 [2024-11-19 23:44:00.878810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:14.882 [2024-11-19 23:44:00.878907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.878940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.878970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879577] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:14.882 [2024-11-19 23:44:00.879638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.879848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.879948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 
[2024-11-19 23:44:00.880821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.880998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:31:14.883 [2024-11-19 23:44:00.881030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:14.883 [2024-11-19 23:44:00.881370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:14.884 [2024-11-19 23:44:00.881446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:14.884 [2024-11-19 23:44:00.881456] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d249e450-9437-4263-84c5-287c10346aed 00:31:14.884 [2024-11-19 23:44:00.881465] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:14.884 [2024-11-19 23:44:00.881481] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:31:14.884 [2024-11-19 23:44:00.881489] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:31:14.884 [2024-11-19 23:44:00.881499] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:31:14.884 [2024-11-19 23:44:00.881512] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:14.884 [2024-11-19 23:44:00.881521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:14.884 [2024-11-19 23:44:00.881529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:14.884 [2024-11-19 23:44:00.881535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:14.884 [2024-11-19 23:44:00.881542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:14.884 [2024-11-19 23:44:00.881553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.884 [2024-11-19 23:44:00.881563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:14.884 [2024-11-19 23:44:00.881572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:31:14.884 [2024-11-19 23:44:00.881584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.884441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.884 [2024-11-19 23:44:00.884578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:14.884 [2024-11-19 23:44:00.884646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.822 ms 00:31:14.884 [2024-11-19 23:44:00.884670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.884821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:14.884 [2024-11-19 23:44:00.884851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:14.884 [2024-11-19 23:44:00.884873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:31:14.884 [2024-11-19 23:44:00.884893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 
23:44:00.892528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.892690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:14.884 [2024-11-19 23:44:00.892759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.892784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.892858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.892881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:14.884 [2024-11-19 23:44:00.892902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.892924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.892998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.893025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:14.884 [2024-11-19 23:44:00.893055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.893144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.893179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.893215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:14.884 [2024-11-19 23:44:00.893276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.893307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.907010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.907202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:14.884 [2024-11-19 23:44:00.907257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.907279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.918966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.919154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:14.884 [2024-11-19 23:44:00.919217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.919241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.919305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.919331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:14.884 [2024-11-19 23:44:00.919353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.919379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.919428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.919450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:14.884 [2024-11-19 23:44:00.919472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.919588] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.919672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.919724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:14.884 [2024-11-19 23:44:00.919771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.919791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.919837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.920027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:14.884 [2024-11-19 23:44:00.920051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.920074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.920129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.920154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:14.884 [2024-11-19 23:44:00.920175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.920259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.920338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:14.884 [2024-11-19 23:44:00.920397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:14.884 [2024-11-19 23:44:00.920422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:14.884 [2024-11-19 23:44:00.920466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:14.884 [2024-11-19 23:44:00.920639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 45.081 ms, result 0 00:31:15.148 00:31:15.148 00:31:15.148 23:44:01 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:17.063 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:17.063 23:44:03 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:17.063 23:44:03 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:31:17.063 23:44:03 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:17.325 Process with pid 92224 is not found 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92224 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 92224 ']' 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 92224 00:31:17.325 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (92224) - No such process 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 92224 is not found' 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:31:17.325 Remove shared memory files 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- 
ftl/common.sh@204 -- # echo Remove shared memory files 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_band_md /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_l2p_l1 /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_l2p_l2 /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_l2p_l2_ctx /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_nvc_md /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_p2l_pool /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_sb /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_sb_shm /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_trim_bitmap /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_trim_log /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_trim_md /dev/hugepages/ftl_d249e450-9437-4263-84c5-287c10346aed_vmap 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:31:17.325 ************************************ 00:31:17.325 END TEST ftl_restore_fast 00:31:17.325 ************************************ 00:31:17.325 00:31:17.325 real 4m36.268s 00:31:17.325 user 4m22.864s 00:31:17.325 sys 0m13.051s 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.325 23:44:03 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:17.325 23:44:03 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:17.325 23:44:03 ftl -- ftl/ftl.sh@14 -- # killprocess 83774 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@954 -- # '[' -z 83774 ']' 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@958 -- # kill -0 83774 00:31:17.325 Process with pid 83774 is not found 00:31:17.325 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83774) - No such process 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 83774 is not found' 00:31:17.325 23:44:03 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:17.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.325 23:44:03 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95043 00:31:17.325 23:44:03 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95043 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@835 -- # '[' -z 95043 ']' 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.325 23:44:03 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.325 23:44:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:17.325 [2024-11-19 23:44:03.425766] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 23.11.0 initialization... 
00:31:17.325 [2024-11-19 23:44:03.425915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95043 ] 00:31:17.586 [2024-11-19 23:44:03.585250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.586 [2024-11-19 23:44:03.613724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.160 23:44:04 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.160 23:44:04 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:18.160 23:44:04 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:18.421 nvme0n1 00:31:18.421 23:44:04 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:18.421 23:44:04 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:18.421 23:44:04 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:18.682 23:44:04 ftl -- ftl/common.sh@28 -- # stores=c3d67121-0071-4e8b-a402-64e620f1d9bc 00:31:18.682 23:44:04 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:18.682 23:44:04 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3d67121-0071-4e8b-a402-64e620f1d9bc 00:31:18.942 23:44:05 ftl -- ftl/ftl.sh@23 -- # killprocess 95043 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 95043 ']' 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@958 -- # kill -0 95043 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@959 -- # uname 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95043 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95043' 00:31:18.942 killing process with pid 95043 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@973 -- # kill 95043 00:31:18.942 23:44:05 ftl -- common/autotest_common.sh@978 -- # wait 95043 00:31:19.202 23:44:05 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:19.463 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:19.463 Waiting for block devices as requested 00:31:19.737 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:19.737 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:19.737 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:19.737 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:25.035 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:25.035 Remove shared memory files 00:31:25.035 23:44:11 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:25.035 23:44:11 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:25.035 23:44:11 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:25.035 23:44:11 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:25.035 23:44:11 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:25.035 23:44:11 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:25.035 23:44:11 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:25.035 
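The RPC sequence traced above (ftl.sh@21 through common.sh@30) attaches the NVMe controller under test, enumerates every lvolstore on it, and deletes each store before the target process is killed. A minimal sketch of that cleanup pattern, using the exact commands the trace shows and assuming only that spdk_tgt is already listening on the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the PCIe NVMe controller as bdev nvme0 (ftl.sh@21 in the trace).
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

    # clear_lvols (common.sh@28-30): list every lvolstore UUID, then delete
    # each one so the base bdev is left clean for the next run.
    stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done

Deleting by UUID rather than by name matches the trace: the one store discovered here is c3d67121-0071-4e8b-a402-64e620f1d9bc, and bdev_lvol_delete_lvstore is invoked once per UUID emitted by the jq filter.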
************************************ 00:31:25.035 END TEST ftl 00:31:25.035 ************************************ 00:31:25.035 00:31:25.035 real 16m54.484s 00:31:25.035 user 18m50.415s 00:31:25.035 sys 1m23.145s 00:31:25.035 23:44:11 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.035 23:44:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:25.035 23:44:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:25.035 23:44:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:25.035 23:44:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:25.036 23:44:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:25.036 23:44:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:25.036 23:44:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:25.036 23:44:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:25.036 23:44:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:25.036 23:44:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:25.036 23:44:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:25.036 23:44:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.036 23:44:11 -- common/autotest_common.sh@10 -- # set +x 00:31:25.036 23:44:11 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:25.036 23:44:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:25.036 23:44:11 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:25.036 23:44:11 -- common/autotest_common.sh@10 -- # set +x 00:31:26.421 INFO: APP EXITING 00:31:26.421 INFO: killing all VMs 00:31:26.421 INFO: killing vhost app 00:31:26.421 INFO: EXIT DONE 00:31:26.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:27.320 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:27.320 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:27.320 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:27.320 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:27.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:28.213 Cleaning 00:31:28.213 Removing: /var/run/dpdk/spdk0/config 00:31:28.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:28.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:28.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:28.213 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:28.213 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:28.213 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:28.213 Removing: /var/run/dpdk/spdk0 00:31:28.213 Removing: /var/run/dpdk/spdk_pid69259 00:31:28.213 Removing: /var/run/dpdk/spdk_pid69417 00:31:28.213 Removing: /var/run/dpdk/spdk_pid69624 00:31:28.213 Removing: /var/run/dpdk/spdk_pid69706 00:31:28.213 Removing: /var/run/dpdk/spdk_pid69729 00:31:28.213 Removing: /var/run/dpdk/spdk_pid69841 00:31:28.213 Removing: /var/run/dpdk/spdk_pid69859 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70036 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70109 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70194 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70289 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70369 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70409 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70440 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70510 00:31:28.213 Removing: /var/run/dpdk/spdk_pid70611 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71025 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71072 
00:31:28.213 Removing: /var/run/dpdk/spdk_pid71119 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71135 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71193 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71209 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71260 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71272 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71318 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71332 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71374 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71392 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71519 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71561 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71639 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71800 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71862 00:31:28.213 Removing: /var/run/dpdk/spdk_pid71893 00:31:28.213 Removing: /var/run/dpdk/spdk_pid72303 00:31:28.213 Removing: /var/run/dpdk/spdk_pid72396 00:31:28.213 Removing: /var/run/dpdk/spdk_pid72496 00:31:28.213 Removing: /var/run/dpdk/spdk_pid72538 00:31:28.213 Removing: /var/run/dpdk/spdk_pid72569 00:31:28.213 Removing: /var/run/dpdk/spdk_pid72642 00:31:28.213 Removing: /var/run/dpdk/spdk_pid73251 00:31:28.213 Removing: /var/run/dpdk/spdk_pid73282 00:31:28.213 Removing: /var/run/dpdk/spdk_pid73733 00:31:28.213 Removing: /var/run/dpdk/spdk_pid73826 00:31:28.213 Removing: /var/run/dpdk/spdk_pid73924 00:31:28.213 Removing: /var/run/dpdk/spdk_pid73961 00:31:28.213 Removing: /var/run/dpdk/spdk_pid73986 00:31:28.213 Removing: /var/run/dpdk/spdk_pid74006 00:31:28.213 Removing: /var/run/dpdk/spdk_pid75821 00:31:28.213 Removing: /var/run/dpdk/spdk_pid75942 00:31:28.213 Removing: /var/run/dpdk/spdk_pid75946 00:31:28.213 Removing: /var/run/dpdk/spdk_pid75963 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76009 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76013 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76025 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76070 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76074 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76086 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76131 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76135 00:31:28.213 Removing: /var/run/dpdk/spdk_pid76147 00:31:28.213 Removing: /var/run/dpdk/spdk_pid77516 00:31:28.213 Removing: /var/run/dpdk/spdk_pid77606 00:31:28.213 Removing: /var/run/dpdk/spdk_pid78997 00:31:28.213 Removing: /var/run/dpdk/spdk_pid80391 00:31:28.213 Removing: /var/run/dpdk/spdk_pid80457 00:31:28.213 Removing: /var/run/dpdk/spdk_pid80511 00:31:28.213 Removing: /var/run/dpdk/spdk_pid80560 00:31:28.213 Removing: /var/run/dpdk/spdk_pid80637 00:31:28.213 Removing: /var/run/dpdk/spdk_pid80704 00:31:28.213 Removing: /var/run/dpdk/spdk_pid80841 00:31:28.213 Removing: /var/run/dpdk/spdk_pid81188 00:31:28.213 Removing: /var/run/dpdk/spdk_pid81213 00:31:28.213 Removing: /var/run/dpdk/spdk_pid81645 00:31:28.213 Removing: /var/run/dpdk/spdk_pid81824 00:31:28.213 Removing: /var/run/dpdk/spdk_pid81912 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82016 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82061 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82086 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82377 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82415 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82466 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82835 00:31:28.213 Removing: /var/run/dpdk/spdk_pid82979 00:31:28.213 Removing: /var/run/dpdk/spdk_pid83774 00:31:28.213 Removing: /var/run/dpdk/spdk_pid83890 00:31:28.213 Removing: /var/run/dpdk/spdk_pid84060 00:31:28.213 Removing: 
/var/run/dpdk/spdk_pid84146 00:31:28.213 Removing: /var/run/dpdk/spdk_pid84432 00:31:28.213 Removing: /var/run/dpdk/spdk_pid84713 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85062 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85222 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85353 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85389 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85555 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85569 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85611 00:31:28.474 Removing: /var/run/dpdk/spdk_pid85887 00:31:28.474 Removing: /var/run/dpdk/spdk_pid86106 00:31:28.474 Removing: /var/run/dpdk/spdk_pid86766 00:31:28.474 Removing: /var/run/dpdk/spdk_pid87446 00:31:28.474 Removing: /var/run/dpdk/spdk_pid87994 00:31:28.474 Removing: /var/run/dpdk/spdk_pid88777 00:31:28.474 Removing: /var/run/dpdk/spdk_pid88924 00:31:28.474 Removing: /var/run/dpdk/spdk_pid89005 00:31:28.474 Removing: /var/run/dpdk/spdk_pid89453 00:31:28.474 Removing: /var/run/dpdk/spdk_pid89507 00:31:28.474 Removing: /var/run/dpdk/spdk_pid90003 00:31:28.474 Removing: /var/run/dpdk/spdk_pid90463 00:31:28.474 Removing: /var/run/dpdk/spdk_pid91271 00:31:28.474 Removing: /var/run/dpdk/spdk_pid91394 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91430 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91483 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91534 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91588 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91776 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91846 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91908 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91964 00:31:28.475 Removing: /var/run/dpdk/spdk_pid91999 00:31:28.475 Removing: /var/run/dpdk/spdk_pid92060 00:31:28.475 Removing: /var/run/dpdk/spdk_pid92224 00:31:28.475 Removing: /var/run/dpdk/spdk_pid92443 00:31:28.475 Removing: /var/run/dpdk/spdk_pid93088 00:31:28.475 Removing: /var/run/dpdk/spdk_pid93724 00:31:28.475 Removing: /var/run/dpdk/spdk_pid94354 00:31:28.475 Removing: /var/run/dpdk/spdk_pid95043 00:31:28.475 Clean 00:31:28.475 23:44:14 -- common/autotest_common.sh@1453 -- # return 0 00:31:28.475 23:44:14 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:28.475 23:44:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.475 23:44:14 -- common/autotest_common.sh@10 -- # set +x 00:31:28.475 23:44:14 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:31:28.475 23:44:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.475 23:44:14 -- common/autotest_common.sh@10 -- # set +x 00:31:28.475 23:44:14 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:28.475 23:44:14 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:28.475 23:44:14 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:28.475 23:44:14 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:28.475 23:44:14 -- spdk/autotest.sh@398 -- # hostname 00:31:28.475 23:44:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:28.735 geninfo: WARNING: invalid characters removed from testname! 
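The lcov capture above and the merge/filter passes that follow (autotest.sh@398 through @408) reduce to three moves: capture the counters produced by the test run into cov_test.info, merge them with the pre-test baseline cov_base.info, then strip everything that is not SPDK's own code. A condensed sketch of that flow; the long --rc lcov/genhtml/geninfo option runs from the log are elided, and the --ignore-errors unused flag the log adds for the /usr/* pass is folded into the loop on the assumption that a plain -r suffices here:

    out=/home/vagrant/spdk_repo/spdk/../output
    src=/home/vagrant/spdk_repo/spdk

    # Capture post-test counters, tagged with the build host's name (autotest.sh@398).
    lcov -q -c --no-external -d "$src" -t "$(hostname)" -o "$out/cov_test.info"

    # Merge the pre-test baseline with the test capture (autotest.sh@399).
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Drop paths outside SPDK proper: DPDK, system headers, and the sample
    # apps excluded in autotest.sh@400-407.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done

    rm -f "$out/cov_base.info" "$out/cov_test.info"

Filtering in place, with -r reading cov_total.info and -o writing back to the same file, is exactly the pattern the log repeats once per exclusion.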
00:31:55.345 23:44:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:55.345 23:44:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:57.263 23:44:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:59.808 23:44:45 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:02.355 23:44:47 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:04.900 23:44:50 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:06.813 23:44:52 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:06.813 23:44:52 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:06.813 23:44:52 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:06.813 23:44:52 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:06.813 23:44:52 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:06.814 23:44:52 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:06.814 + [[ -n 5761 ]] 00:32:06.814 + sudo kill 5761 00:32:06.825 [Pipeline] } 00:32:06.841 [Pipeline] // timeout 00:32:06.847 [Pipeline] } 00:32:06.864 [Pipeline] // stage 00:32:06.870 [Pipeline] } 00:32:06.884 [Pipeline] // catchError 00:32:06.894 [Pipeline] stage 00:32:06.897 [Pipeline] { (Stop VM) 00:32:06.910 [Pipeline] sh 00:32:07.196 + vagrant halt 00:32:09.739 ==> default: Halting domain... 
00:32:15.042 [Pipeline] sh 00:32:15.325 + vagrant destroy -f 00:32:17.873 ==> default: Removing domain... 00:32:18.831 [Pipeline] sh 00:32:19.265 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:32:19.276 [Pipeline] } 00:32:19.289 [Pipeline] // stage 00:32:19.295 [Pipeline] } 00:32:19.308 [Pipeline] // dir 00:32:19.314 [Pipeline] } 00:32:19.329 [Pipeline] // wrap 00:32:19.337 [Pipeline] } 00:32:19.349 [Pipeline] // catchError 00:32:19.359 [Pipeline] stage 00:32:19.361 [Pipeline] { (Epilogue) 00:32:19.373 [Pipeline] sh 00:32:19.661 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:24.957 [Pipeline] catchError 00:32:24.959 [Pipeline] { 00:32:24.972 [Pipeline] sh 00:32:25.257 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:25.257 Artifacts sizes are good 00:32:25.268 [Pipeline] } 00:32:25.283 [Pipeline] // catchError 00:32:25.297 [Pipeline] archiveArtifacts 00:32:25.308 Archiving artifacts 00:32:25.425 [Pipeline] cleanWs 00:32:25.435 [WS-CLEANUP] Deleting project workspace... 00:32:25.435 [WS-CLEANUP] Deferred wipeout is used... 00:32:25.441 [WS-CLEANUP] done 00:32:25.443 [Pipeline] } 00:32:25.457 [Pipeline] // stage 00:32:25.462 [Pipeline] } 00:32:25.475 [Pipeline] // node 00:32:25.479 [Pipeline] End of Pipeline 00:32:25.518 Finished: SUCCESS